Test Report: KVM_Linux_crio 21764

d8ceda1a406080ee928dec4912f2c0ffeefd6083:2025-10-18:41957

Failed tests (3/324)

Order  Failed test                                     Duration (s)
37     TestAddons/parallel/Ingress                     165.36
244    TestPreload                                     131.49
289    TestPause/serial/SecondStartNoReconfiguration   84.86
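Any of these can be re-run in isolation with go test's -run filter. A minimal sketch, assuming minikube's standard integration-test layout under test/integration and a prebuilt out/minikube-linux-amd64; the harness's own flags that select the kvm2 driver and crio runtime are omitted here:

	# from a minikube checkout: re-run only the failed ingress test (paths are assumptions)
	go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Ingress"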
TestAddons/parallel/Ingress (165.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-281483 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-281483 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-281483 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a0b89899-4a96-4e7d-83a7-2bf1d0fe72c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a0b89899-4a96-4e7d-83a7-2bf1d0fe72c7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.031119444s
I1018 09:00:13.102065  108373 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-281483 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.99159329s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
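Exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT), propagated back through minikube ssh, so the command ran but the ingress never answered within curl's window. The probe can be replayed by hand against the same profile; a minimal sketch (profile name and namespace are taken from this log; the deployment name ingress-nginx-controller is the upstream default and an assumption here):

	# repeat the test's probe, with verbose output this time
	out/minikube-linux-amd64 -p addons-281483 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# then check the controller the probe depends on
	kubectl --context addons-281483 -n ingress-nginx get pods -o wide
	kubectl --context addons-281483 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50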
addons_test.go:288: (dbg) Run:  kubectl --context addons-281483 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.144
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-281483 -n addons-281483
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 logs -n 25: (1.387211645s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-425706 │ download-only-425706 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ 18 Oct 25 08:55 UTC │
	│ start │ --download-only -p binary-mirror-232384 --alsologtostderr --binary-mirror http://127.0.0.1:36103 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-232384 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ │
	│ delete │ -p binary-mirror-232384 │ binary-mirror-232384 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ 18 Oct 25 08:55 UTC │
	│ addons │ disable dashboard -p addons-281483 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ │
	│ addons │ enable dashboard -p addons-281483 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ │
	│ start │ -p addons-281483 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable volcano --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable gcp-auth --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ enable headlamp -p addons-281483 --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable metrics-server --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable headlamp --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ ip │ addons-281483 ip │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable registry --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ ssh │ addons-281483 ssh cat /opt/local-path-provisioner/pvc-508c2e42-fca4-46f4-88c0-fd619d317595_default_test-pvc/file1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 08:59 UTC │
	│ addons │ addons-281483 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 08:59 UTC │ 18 Oct 25 09:00 UTC │
	│ addons │ addons-281483 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ addons │ addons-281483 addons disable yakd --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ ssh │ addons-281483 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-281483 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ addons │ addons-281483 addons disable registry-creds --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ addons │ addons-281483 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ addons │ addons-281483 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:00 UTC │ 18 Oct 25 09:00 UTC │
	│ ip │ addons-281483 ip │ addons-281483 │ jenkins │ v1.37.0 │ 18 Oct 25 09:02 UTC │ 18 Oct 25 09:02 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:55:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:55:57.847371  109098 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:55:57.847619  109098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:57.847627  109098 out.go:374] Setting ErrFile to fd 2...
	I1018 08:55:57.847631  109098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:57.847847  109098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 08:55:57.848371  109098 out.go:368] Setting JSON to false
	I1018 08:55:57.849193  109098 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2298,"bootTime":1760775460,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:55:57.849285  109098 start.go:141] virtualization: kvm guest
	I1018 08:55:57.851178  109098 out.go:179] * [addons-281483] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:55:57.852636  109098 notify.go:220] Checking for updates...
	I1018 08:55:57.852655  109098 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 08:55:57.854073  109098 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:55:57.855495  109098 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 08:55:57.856729  109098 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 08:55:57.858088  109098 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:55:57.859347  109098 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:55:57.860740  109098 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:55:57.891187  109098 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 08:55:57.892675  109098 start.go:305] selected driver: kvm2
	I1018 08:55:57.892693  109098 start.go:925] validating driver "kvm2" against <nil>
	I1018 08:55:57.892711  109098 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:55:57.893387  109098 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:57.893516  109098 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:57.907834  109098 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:57.907867  109098 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:57.922221  109098 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:57.922274  109098 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:55:57.922557  109098 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:55:57.922593  109098 cni.go:84] Creating CNI manager for ""
	I1018 08:55:57.922653  109098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:55:57.922666  109098 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 08:55:57.922745  109098 start.go:349] cluster config:
	{Name:addons-281483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-281483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:55:57.922869  109098 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:57.924728  109098 out.go:179] * Starting "addons-281483" primary control-plane node in "addons-281483" cluster
	I1018 08:55:57.925970  109098 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:55:57.926184  109098 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:55:57.926201  109098 cache.go:58] Caching tarball of preloaded images
	I1018 08:55:57.926351  109098 preload.go:233] Found /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:55:57.926361  109098 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:55:57.927064  109098 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/config.json ...
	I1018 08:55:57.927091  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/config.json: {Name:mka068a19d31459d7d9fa73ff7c53758cb08aed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:55:57.927283  109098 start.go:360] acquireMachinesLock for addons-281483: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 08:55:57.927334  109098 start.go:364] duration metric: took 36.131µs to acquireMachinesLock for "addons-281483"
	I1018 08:55:57.927359  109098 start.go:93] Provisioning new machine with config: &{Name:addons-281483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-281483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:55:57.927490  109098 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 08:55:57.929207  109098 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1018 08:55:57.929379  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:55:57.929420  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:55:57.942587  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1018 08:55:57.943033  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:55:57.943619  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:55:57.943643  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:55:57.944004  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:55:57.944262  109098 main.go:141] libmachine: (addons-281483) Calling .GetMachineName
	I1018 08:55:57.944401  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:55:57.944573  109098 start.go:159] libmachine.API.Create for "addons-281483" (driver="kvm2")
	I1018 08:55:57.944600  109098 client.go:168] LocalClient.Create starting
	I1018 08:55:57.944653  109098 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem
	I1018 08:55:58.078814  109098 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem
	I1018 08:55:58.323173  109098 main.go:141] libmachine: Running pre-create checks...
	I1018 08:55:58.323198  109098 main.go:141] libmachine: (addons-281483) Calling .PreCreateCheck
	I1018 08:55:58.323640  109098 main.go:141] libmachine: (addons-281483) Calling .GetConfigRaw
	I1018 08:55:58.324073  109098 main.go:141] libmachine: Creating machine...
	I1018 08:55:58.324089  109098 main.go:141] libmachine: (addons-281483) Calling .Create
	I1018 08:55:58.324255  109098 main.go:141] libmachine: (addons-281483) creating domain...
	I1018 08:55:58.324276  109098 main.go:141] libmachine: (addons-281483) creating network...
	I1018 08:55:58.325841  109098 main.go:141] libmachine: (addons-281483) DBG | found existing default network
	I1018 08:55:58.326053  109098 main.go:141] libmachine: (addons-281483) DBG | <network>
	I1018 08:55:58.326079  109098 main.go:141] libmachine: (addons-281483) DBG |   <name>default</name>
	I1018 08:55:58.326109  109098 main.go:141] libmachine: (addons-281483) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 08:55:58.326134  109098 main.go:141] libmachine: (addons-281483) DBG |   <forward mode='nat'>
	I1018 08:55:58.326159  109098 main.go:141] libmachine: (addons-281483) DBG |     <nat>
	I1018 08:55:58.326168  109098 main.go:141] libmachine: (addons-281483) DBG |       <port start='1024' end='65535'/>
	I1018 08:55:58.326177  109098 main.go:141] libmachine: (addons-281483) DBG |     </nat>
	I1018 08:55:58.326192  109098 main.go:141] libmachine: (addons-281483) DBG |   </forward>
	I1018 08:55:58.326205  109098 main.go:141] libmachine: (addons-281483) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 08:55:58.326215  109098 main.go:141] libmachine: (addons-281483) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 08:55:58.326256  109098 main.go:141] libmachine: (addons-281483) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 08:55:58.326284  109098 main.go:141] libmachine: (addons-281483) DBG |     <dhcp>
	I1018 08:55:58.326297  109098 main.go:141] libmachine: (addons-281483) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 08:55:58.326310  109098 main.go:141] libmachine: (addons-281483) DBG |     </dhcp>
	I1018 08:55:58.326322  109098 main.go:141] libmachine: (addons-281483) DBG |   </ip>
	I1018 08:55:58.326330  109098 main.go:141] libmachine: (addons-281483) DBG | </network>
	I1018 08:55:58.326341  109098 main.go:141] libmachine: (addons-281483) DBG | 
	I1018 08:55:58.326823  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:55:58.326678  109126 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013800}
	I1018 08:55:58.326883  109098 main.go:141] libmachine: (addons-281483) DBG | defining private network:
	I1018 08:55:58.326908  109098 main.go:141] libmachine: (addons-281483) DBG | 
	I1018 08:55:58.326934  109098 main.go:141] libmachine: (addons-281483) DBG | <network>
	I1018 08:55:58.326951  109098 main.go:141] libmachine: (addons-281483) DBG |   <name>mk-addons-281483</name>
	I1018 08:55:58.326962  109098 main.go:141] libmachine: (addons-281483) DBG |   <dns enable='no'/>
	I1018 08:55:58.326971  109098 main.go:141] libmachine: (addons-281483) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 08:55:58.326983  109098 main.go:141] libmachine: (addons-281483) DBG |     <dhcp>
	I1018 08:55:58.326990  109098 main.go:141] libmachine: (addons-281483) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 08:55:58.326995  109098 main.go:141] libmachine: (addons-281483) DBG |     </dhcp>
	I1018 08:55:58.327002  109098 main.go:141] libmachine: (addons-281483) DBG |   </ip>
	I1018 08:55:58.327027  109098 main.go:141] libmachine: (addons-281483) DBG | </network>
	I1018 08:55:58.327047  109098 main.go:141] libmachine: (addons-281483) DBG | 
	I1018 08:55:58.403219  109098 main.go:141] libmachine: (addons-281483) DBG | creating private network mk-addons-281483 192.168.39.0/24...
	I1018 08:55:58.473098  109098 main.go:141] libmachine: (addons-281483) DBG | private network mk-addons-281483 192.168.39.0/24 created
	I1018 08:55:58.473500  109098 main.go:141] libmachine: (addons-281483) setting up store path in /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483 ...
	I1018 08:55:58.473530  109098 main.go:141] libmachine: (addons-281483) building disk image from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 08:55:58.473540  109098 main.go:141] libmachine: (addons-281483) DBG | <network>
	I1018 08:55:58.473555  109098 main.go:141] libmachine: (addons-281483) DBG |   <name>mk-addons-281483</name>
	I1018 08:55:58.473572  109098 main.go:141] libmachine: (addons-281483) DBG |   <uuid>9c6fb49a-1e16-43ff-b02c-7b55d002734a</uuid>
	I1018 08:55:58.473594  109098 main.go:141] libmachine: (addons-281483) Downloading /home/jenkins/minikube-integration/21764-104457/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 08:55:58.473650  109098 main.go:141] libmachine: (addons-281483) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1018 08:55:58.473683  109098 main.go:141] libmachine: (addons-281483) DBG |   <mac address='52:54:00:a1:c5:15'/>
	I1018 08:55:58.473698  109098 main.go:141] libmachine: (addons-281483) DBG |   <dns enable='no'/>
	I1018 08:55:58.473711  109098 main.go:141] libmachine: (addons-281483) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 08:55:58.473723  109098 main.go:141] libmachine: (addons-281483) DBG |     <dhcp>
	I1018 08:55:58.473734  109098 main.go:141] libmachine: (addons-281483) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 08:55:58.473743  109098 main.go:141] libmachine: (addons-281483) DBG |     </dhcp>
	I1018 08:55:58.473752  109098 main.go:141] libmachine: (addons-281483) DBG |   </ip>
	I1018 08:55:58.473829  109098 main.go:141] libmachine: (addons-281483) DBG | </network>
	I1018 08:55:58.473877  109098 main.go:141] libmachine: (addons-281483) DBG | 
	I1018 08:55:58.473930  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:55:58.473424  109126 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 08:55:58.735102  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:55:58.734973  109126 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa...
	I1018 08:55:58.997702  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:55:58.997547  109126 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/addons-281483.rawdisk...
	I1018 08:55:58.997737  109098 main.go:141] libmachine: (addons-281483) DBG | Writing magic tar header
	I1018 08:55:58.997760  109098 main.go:141] libmachine: (addons-281483) DBG | Writing SSH key tar header
	I1018 08:55:58.997772  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:55:58.997695  109126 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483 ...
	I1018 08:55:58.997893  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483 (perms=drwx------)
	I1018 08:55:58.997933  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines (perms=drwxr-xr-x)
	I1018 08:55:58.997946  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483
	I1018 08:55:58.997963  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines
	I1018 08:55:58.997977  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 08:55:58.997991  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube (perms=drwxr-xr-x)
	I1018 08:55:58.998003  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins/minikube-integration/21764-104457 (perms=drwxrwxr-x)
	I1018 08:55:58.998009  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 08:55:58.998019  109098 main.go:141] libmachine: (addons-281483) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 08:55:58.998027  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457
	I1018 08:55:58.998038  109098 main.go:141] libmachine: (addons-281483) defining domain...
	I1018 08:55:58.998055  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 08:55:58.998064  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home/jenkins
	I1018 08:55:58.998080  109098 main.go:141] libmachine: (addons-281483) DBG | checking permissions on dir: /home
	I1018 08:55:58.998091  109098 main.go:141] libmachine: (addons-281483) DBG | skipping /home - not owner
	I1018 08:55:58.999214  109098 main.go:141] libmachine: (addons-281483) defining domain using XML: 
	I1018 08:55:58.999239  109098 main.go:141] libmachine: (addons-281483) <domain type='kvm'>
	I1018 08:55:58.999250  109098 main.go:141] libmachine: (addons-281483)   <name>addons-281483</name>
	I1018 08:55:58.999260  109098 main.go:141] libmachine: (addons-281483)   <memory unit='MiB'>4096</memory>
	I1018 08:55:58.999268  109098 main.go:141] libmachine: (addons-281483)   <vcpu>2</vcpu>
	I1018 08:55:58.999275  109098 main.go:141] libmachine: (addons-281483)   <features>
	I1018 08:55:58.999282  109098 main.go:141] libmachine: (addons-281483)     <acpi/>
	I1018 08:55:58.999289  109098 main.go:141] libmachine: (addons-281483)     <apic/>
	I1018 08:55:58.999294  109098 main.go:141] libmachine: (addons-281483)     <pae/>
	I1018 08:55:58.999299  109098 main.go:141] libmachine: (addons-281483)   </features>
	I1018 08:55:58.999304  109098 main.go:141] libmachine: (addons-281483)   <cpu mode='host-passthrough'>
	I1018 08:55:58.999308  109098 main.go:141] libmachine: (addons-281483)   </cpu>
	I1018 08:55:58.999313  109098 main.go:141] libmachine: (addons-281483)   <os>
	I1018 08:55:58.999318  109098 main.go:141] libmachine: (addons-281483)     <type>hvm</type>
	I1018 08:55:58.999323  109098 main.go:141] libmachine: (addons-281483)     <boot dev='cdrom'/>
	I1018 08:55:58.999327  109098 main.go:141] libmachine: (addons-281483)     <boot dev='hd'/>
	I1018 08:55:58.999332  109098 main.go:141] libmachine: (addons-281483)     <bootmenu enable='no'/>
	I1018 08:55:58.999338  109098 main.go:141] libmachine: (addons-281483)   </os>
	I1018 08:55:58.999343  109098 main.go:141] libmachine: (addons-281483)   <devices>
	I1018 08:55:58.999351  109098 main.go:141] libmachine: (addons-281483)     <disk type='file' device='cdrom'>
	I1018 08:55:58.999364  109098 main.go:141] libmachine: (addons-281483)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/boot2docker.iso'/>
	I1018 08:55:58.999376  109098 main.go:141] libmachine: (addons-281483)       <target dev='hdc' bus='scsi'/>
	I1018 08:55:58.999381  109098 main.go:141] libmachine: (addons-281483)       <readonly/>
	I1018 08:55:58.999388  109098 main.go:141] libmachine: (addons-281483)     </disk>
	I1018 08:55:58.999403  109098 main.go:141] libmachine: (addons-281483)     <disk type='file' device='disk'>
	I1018 08:55:58.999412  109098 main.go:141] libmachine: (addons-281483)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 08:55:58.999419  109098 main.go:141] libmachine: (addons-281483)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/addons-281483.rawdisk'/>
	I1018 08:55:58.999426  109098 main.go:141] libmachine: (addons-281483)       <target dev='hda' bus='virtio'/>
	I1018 08:55:58.999430  109098 main.go:141] libmachine: (addons-281483)     </disk>
	I1018 08:55:58.999435  109098 main.go:141] libmachine: (addons-281483)     <interface type='network'>
	I1018 08:55:58.999441  109098 main.go:141] libmachine: (addons-281483)       <source network='mk-addons-281483'/>
	I1018 08:55:58.999452  109098 main.go:141] libmachine: (addons-281483)       <model type='virtio'/>
	I1018 08:55:58.999457  109098 main.go:141] libmachine: (addons-281483)     </interface>
	I1018 08:55:58.999462  109098 main.go:141] libmachine: (addons-281483)     <interface type='network'>
	I1018 08:55:58.999468  109098 main.go:141] libmachine: (addons-281483)       <source network='default'/>
	I1018 08:55:58.999473  109098 main.go:141] libmachine: (addons-281483)       <model type='virtio'/>
	I1018 08:55:58.999478  109098 main.go:141] libmachine: (addons-281483)     </interface>
	I1018 08:55:58.999482  109098 main.go:141] libmachine: (addons-281483)     <serial type='pty'>
	I1018 08:55:58.999487  109098 main.go:141] libmachine: (addons-281483)       <target port='0'/>
	I1018 08:55:58.999491  109098 main.go:141] libmachine: (addons-281483)     </serial>
	I1018 08:55:58.999495  109098 main.go:141] libmachine: (addons-281483)     <console type='pty'>
	I1018 08:55:58.999501  109098 main.go:141] libmachine: (addons-281483)       <target type='serial' port='0'/>
	I1018 08:55:58.999505  109098 main.go:141] libmachine: (addons-281483)     </console>
	I1018 08:55:58.999510  109098 main.go:141] libmachine: (addons-281483)     <rng model='virtio'>
	I1018 08:55:58.999515  109098 main.go:141] libmachine: (addons-281483)       <backend model='random'>/dev/random</backend>
	I1018 08:55:58.999519  109098 main.go:141] libmachine: (addons-281483)     </rng>
	I1018 08:55:58.999523  109098 main.go:141] libmachine: (addons-281483)   </devices>
	I1018 08:55:58.999530  109098 main.go:141] libmachine: (addons-281483) </domain>
	I1018 08:55:58.999559  109098 main.go:141] libmachine: (addons-281483) 
	I1018 08:55:59.080422  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:0c:fe:ee in network default
	I1018 08:55:59.081154  109098 main.go:141] libmachine: (addons-281483) starting domain...
	I1018 08:55:59.081182  109098 main.go:141] libmachine: (addons-281483) ensuring networks are active...
	I1018 08:55:59.081192  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:55:59.082022  109098 main.go:141] libmachine: (addons-281483) Ensuring network default is active
	I1018 08:55:59.082450  109098 main.go:141] libmachine: (addons-281483) Ensuring network mk-addons-281483 is active
	I1018 08:55:59.083241  109098 main.go:141] libmachine: (addons-281483) getting domain XML...
	I1018 08:55:59.084527  109098 main.go:141] libmachine: (addons-281483) DBG | starting domain XML:
	I1018 08:55:59.084550  109098 main.go:141] libmachine: (addons-281483) DBG | <domain type='kvm'>
	I1018 08:55:59.084561  109098 main.go:141] libmachine: (addons-281483) DBG |   <name>addons-281483</name>
	I1018 08:55:59.084574  109098 main.go:141] libmachine: (addons-281483) DBG |   <uuid>9d85d66c-12e6-4b3e-aef3-8ab5ceef778c</uuid>
	I1018 08:55:59.084600  109098 main.go:141] libmachine: (addons-281483) DBG |   <memory unit='KiB'>4194304</memory>
	I1018 08:55:59.084613  109098 main.go:141] libmachine: (addons-281483) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1018 08:55:59.084622  109098 main.go:141] libmachine: (addons-281483) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 08:55:59.084629  109098 main.go:141] libmachine: (addons-281483) DBG |   <os>
	I1018 08:55:59.084640  109098 main.go:141] libmachine: (addons-281483) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 08:55:59.084649  109098 main.go:141] libmachine: (addons-281483) DBG |     <boot dev='cdrom'/>
	I1018 08:55:59.084661  109098 main.go:141] libmachine: (addons-281483) DBG |     <boot dev='hd'/>
	I1018 08:55:59.084669  109098 main.go:141] libmachine: (addons-281483) DBG |     <bootmenu enable='no'/>
	I1018 08:55:59.084678  109098 main.go:141] libmachine: (addons-281483) DBG |   </os>
	I1018 08:55:59.084693  109098 main.go:141] libmachine: (addons-281483) DBG |   <features>
	I1018 08:55:59.084704  109098 main.go:141] libmachine: (addons-281483) DBG |     <acpi/>
	I1018 08:55:59.084710  109098 main.go:141] libmachine: (addons-281483) DBG |     <apic/>
	I1018 08:55:59.084732  109098 main.go:141] libmachine: (addons-281483) DBG |     <pae/>
	I1018 08:55:59.084751  109098 main.go:141] libmachine: (addons-281483) DBG |   </features>
	I1018 08:55:59.084765  109098 main.go:141] libmachine: (addons-281483) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 08:55:59.084775  109098 main.go:141] libmachine: (addons-281483) DBG |   <clock offset='utc'/>
	I1018 08:55:59.084785  109098 main.go:141] libmachine: (addons-281483) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 08:55:59.084793  109098 main.go:141] libmachine: (addons-281483) DBG |   <on_reboot>restart</on_reboot>
	I1018 08:55:59.084798  109098 main.go:141] libmachine: (addons-281483) DBG |   <on_crash>destroy</on_crash>
	I1018 08:55:59.084805  109098 main.go:141] libmachine: (addons-281483) DBG |   <devices>
	I1018 08:55:59.084811  109098 main.go:141] libmachine: (addons-281483) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 08:55:59.084818  109098 main.go:141] libmachine: (addons-281483) DBG |     <disk type='file' device='cdrom'>
	I1018 08:55:59.084824  109098 main.go:141] libmachine: (addons-281483) DBG |       <driver name='qemu' type='raw'/>
	I1018 08:55:59.084837  109098 main.go:141] libmachine: (addons-281483) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/boot2docker.iso'/>
	I1018 08:55:59.084844  109098 main.go:141] libmachine: (addons-281483) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 08:55:59.084849  109098 main.go:141] libmachine: (addons-281483) DBG |       <readonly/>
	I1018 08:55:59.084858  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 08:55:59.084864  109098 main.go:141] libmachine: (addons-281483) DBG |     </disk>
	I1018 08:55:59.084872  109098 main.go:141] libmachine: (addons-281483) DBG |     <disk type='file' device='disk'>
	I1018 08:55:59.084879  109098 main.go:141] libmachine: (addons-281483) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 08:55:59.084891  109098 main.go:141] libmachine: (addons-281483) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/addons-281483.rawdisk'/>
	I1018 08:55:59.084899  109098 main.go:141] libmachine: (addons-281483) DBG |       <target dev='hda' bus='virtio'/>
	I1018 08:55:59.084907  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 08:55:59.084918  109098 main.go:141] libmachine: (addons-281483) DBG |     </disk>
	I1018 08:55:59.084960  109098 main.go:141] libmachine: (addons-281483) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 08:55:59.084997  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 08:55:59.085014  109098 main.go:141] libmachine: (addons-281483) DBG |     </controller>
	I1018 08:55:59.085024  109098 main.go:141] libmachine: (addons-281483) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 08:55:59.085036  109098 main.go:141] libmachine: (addons-281483) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 08:55:59.085051  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 08:55:59.085070  109098 main.go:141] libmachine: (addons-281483) DBG |     </controller>
	I1018 08:55:59.085088  109098 main.go:141] libmachine: (addons-281483) DBG |     <interface type='network'>
	I1018 08:55:59.085098  109098 main.go:141] libmachine: (addons-281483) DBG |       <mac address='52:54:00:4f:78:29'/>
	I1018 08:55:59.085106  109098 main.go:141] libmachine: (addons-281483) DBG |       <source network='mk-addons-281483'/>
	I1018 08:55:59.085115  109098 main.go:141] libmachine: (addons-281483) DBG |       <model type='virtio'/>
	I1018 08:55:59.085124  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 08:55:59.085133  109098 main.go:141] libmachine: (addons-281483) DBG |     </interface>
	I1018 08:55:59.085152  109098 main.go:141] libmachine: (addons-281483) DBG |     <interface type='network'>
	I1018 08:55:59.085161  109098 main.go:141] libmachine: (addons-281483) DBG |       <mac address='52:54:00:0c:fe:ee'/>
	I1018 08:55:59.085170  109098 main.go:141] libmachine: (addons-281483) DBG |       <source network='default'/>
	I1018 08:55:59.085181  109098 main.go:141] libmachine: (addons-281483) DBG |       <model type='virtio'/>
	I1018 08:55:59.085191  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 08:55:59.085200  109098 main.go:141] libmachine: (addons-281483) DBG |     </interface>
	I1018 08:55:59.085209  109098 main.go:141] libmachine: (addons-281483) DBG |     <serial type='pty'>
	I1018 08:55:59.085217  109098 main.go:141] libmachine: (addons-281483) DBG |       <target type='isa-serial' port='0'>
	I1018 08:55:59.085234  109098 main.go:141] libmachine: (addons-281483) DBG |         <model name='isa-serial'/>
	I1018 08:55:59.085251  109098 main.go:141] libmachine: (addons-281483) DBG |       </target>
	I1018 08:55:59.085261  109098 main.go:141] libmachine: (addons-281483) DBG |     </serial>
	I1018 08:55:59.085273  109098 main.go:141] libmachine: (addons-281483) DBG |     <console type='pty'>
	I1018 08:55:59.085284  109098 main.go:141] libmachine: (addons-281483) DBG |       <target type='serial' port='0'/>
	I1018 08:55:59.085288  109098 main.go:141] libmachine: (addons-281483) DBG |     </console>
	I1018 08:55:59.085295  109098 main.go:141] libmachine: (addons-281483) DBG |     <input type='mouse' bus='ps2'/>
	I1018 08:55:59.085300  109098 main.go:141] libmachine: (addons-281483) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 08:55:59.085306  109098 main.go:141] libmachine: (addons-281483) DBG |     <audio id='1' type='none'/>
	I1018 08:55:59.085314  109098 main.go:141] libmachine: (addons-281483) DBG |     <memballoon model='virtio'>
	I1018 08:55:59.085331  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 08:55:59.085346  109098 main.go:141] libmachine: (addons-281483) DBG |     </memballoon>
	I1018 08:55:59.085356  109098 main.go:141] libmachine: (addons-281483) DBG |     <rng model='virtio'>
	I1018 08:55:59.085367  109098 main.go:141] libmachine: (addons-281483) DBG |       <backend model='random'>/dev/random</backend>
	I1018 08:55:59.085380  109098 main.go:141] libmachine: (addons-281483) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 08:55:59.085427  109098 main.go:141] libmachine: (addons-281483) DBG |     </rng>
	I1018 08:55:59.085443  109098 main.go:141] libmachine: (addons-281483) DBG |   </devices>
	I1018 08:55:59.085454  109098 main.go:141] libmachine: (addons-281483) DBG | </domain>
	I1018 08:55:59.085464  109098 main.go:141] libmachine: (addons-281483) DBG | 
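The XML dump above is the complete libvirt domain definition the kvm2 driver submits before booting the VM. A minimal sketch of that define-and-start step using the libvirt Go bindings (assuming the libvirt.org/go/libvirt module; illustrative only, not minikube's actual driver code):

// define_domain.go: sketch of defining and starting a libvirt domain
// from XML, roughly what the kvm2 driver does with the dump above.
// Illustrative; assumes the libvirt.org/go/libvirt bindings.
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// placeholder: the full <domain> XML logged above would go here
	domainXML := `<domain type='kvm'>...</domain>`

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM
		log.Fatalf("start: %v", err)
	}
	fmt.Println("domain is now running")
}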
	I1018 08:56:00.739968  109098 main.go:141] libmachine: (addons-281483) waiting for domain to start...
	I1018 08:56:00.741363  109098 main.go:141] libmachine: (addons-281483) domain is now running
	I1018 08:56:00.741393  109098 main.go:141] libmachine: (addons-281483) waiting for IP...
	I1018 08:56:00.742064  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:00.742540  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:00.742557  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:00.742781  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:00.742876  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:00.742802  109126 retry.go:31] will retry after 236.352625ms: waiting for domain to come up
	I1018 08:56:00.981500  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:00.982110  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:00.982148  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:00.982476  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:00.982533  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:00.982468  109126 retry.go:31] will retry after 279.050931ms: waiting for domain to come up
	I1018 08:56:01.262997  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:01.263421  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:01.263445  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:01.263731  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:01.263760  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:01.263714  109126 retry.go:31] will retry after 375.030928ms: waiting for domain to come up
	I1018 08:56:01.640418  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:01.640944  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:01.640967  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:01.641318  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:01.641448  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:01.641376  109126 retry.go:31] will retry after 482.496795ms: waiting for domain to come up
	I1018 08:56:02.125072  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:02.125640  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:02.125665  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:02.126047  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:02.126116  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:02.126044  109126 retry.go:31] will retry after 621.351805ms: waiting for domain to come up
	I1018 08:56:02.749102  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:02.749599  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:02.749629  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:02.749932  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:02.749964  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:02.749894  109126 retry.go:31] will retry after 608.317117ms: waiting for domain to come up
	I1018 08:56:03.359540  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:03.360310  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:03.360342  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:03.360661  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:03.360715  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:03.360662  109126 retry.go:31] will retry after 752.381463ms: waiting for domain to come up
	I1018 08:56:04.114289  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:04.114756  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:04.114784  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:04.115159  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:04.115227  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:04.115133  109126 retry.go:31] will retry after 1.317527478s: waiting for domain to come up
	I1018 08:56:05.434804  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:05.435413  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:05.435447  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:05.435686  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:05.435719  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:05.435656  109126 retry.go:31] will retry after 1.515327087s: waiting for domain to come up
	I1018 08:56:06.953518  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:06.954044  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:06.954075  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:06.954352  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:06.954392  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:06.954333  109126 retry.go:31] will retry after 1.757972901s: waiting for domain to come up
	I1018 08:56:08.714060  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:08.714697  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:08.714727  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:08.715163  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:08.715250  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:08.715149  109126 retry.go:31] will retry after 2.815152173s: waiting for domain to come up
	I1018 08:56:11.534306  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:11.534776  109098 main.go:141] libmachine: (addons-281483) DBG | no network interface addresses found for domain addons-281483 (source=lease)
	I1018 08:56:11.534798  109098 main.go:141] libmachine: (addons-281483) DBG | trying to list again with source=arp
	I1018 08:56:11.535034  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find current IP address of domain addons-281483 in network mk-addons-281483 (interfaces detected: [])
	I1018 08:56:11.535092  109098 main.go:141] libmachine: (addons-281483) DBG | I1018 08:56:11.535013  109126 retry.go:31] will retry after 3.432550027s: waiting for domain to come up
	I1018 08:56:14.970112  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:14.970813  109098 main.go:141] libmachine: (addons-281483) found domain IP: 192.168.39.144
	I1018 08:56:14.970832  109098 main.go:141] libmachine: (addons-281483) reserving static IP address...
	I1018 08:56:14.970860  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has current primary IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:14.971349  109098 main.go:141] libmachine: (addons-281483) DBG | unable to find host DHCP lease matching {name: "addons-281483", mac: "52:54:00:4f:78:29", ip: "192.168.39.144"} in network mk-addons-281483
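The repeating lease/ARP probes above come from a retry helper that re-polls with a growing interval until the guest acquires a DHCP lease. A minimal Go sketch of that wait-for-IP loop (backoff values are illustrative, matching only the shape of the log's 236ms-to-3.4s progression; lookupIP stands in for the driver's lease/ARP scan):

// wait_for_ip.go: sketch of the "waiting for IP" retry loop seen above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for the driver's DHCP-lease scan, which
// falls back to the ARP table when no lease is found yet.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval between probes
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}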
	I1018 08:56:15.159269  109098 main.go:141] libmachine: (addons-281483) DBG | Getting to WaitForSSH function...
	I1018 08:56:15.159382  109098 main.go:141] libmachine: (addons-281483) reserved static IP address 192.168.39.144 for domain addons-281483
	I1018 08:56:15.159398  109098 main.go:141] libmachine: (addons-281483) waiting for SSH...
	I1018 08:56:15.161973  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.162436  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.162477  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.162603  109098 main.go:141] libmachine: (addons-281483) DBG | Using SSH client type: external
	I1018 08:56:15.162629  109098 main.go:141] libmachine: (addons-281483) DBG | Using SSH private key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa (-rw-------)
	I1018 08:56:15.162667  109098 main.go:141] libmachine: (addons-281483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 08:56:15.162681  109098 main.go:141] libmachine: (addons-281483) DBG | About to run SSH command:
	I1018 08:56:15.162690  109098 main.go:141] libmachine: (addons-281483) DBG | exit 0
	I1018 08:56:15.304308  109098 main.go:141] libmachine: (addons-281483) DBG | SSH cmd err, output: <nil>: 
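The SSH wait simply runs `exit 0` through an external ssh client with the options shown above; a zero exit status means sshd is up and the key is accepted. A hedged sketch of that probe with os/exec (flags and address copied from this run, key path shortened to an example):

// probe_ssh.go: sketch of the external-SSH reachability probe above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machines/addons-281483/id_rsa", // example path
		"-p", "22",
		"docker@192.168.39.144",
		"exit 0", // success means sshd is reachable and the key works
	}
	if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	log.Println("SSH is available")
}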
	I1018 08:56:15.304472  109098 main.go:141] libmachine: (addons-281483) domain creation complete
	I1018 08:56:15.304869  109098 main.go:141] libmachine: (addons-281483) Calling .GetConfigRaw
	I1018 08:56:15.305583  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:15.305813  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:15.306060  109098 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 08:56:15.306080  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:15.307515  109098 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 08:56:15.307532  109098 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 08:56:15.307538  109098 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 08:56:15.307544  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:15.309780  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.310267  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.310292  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.310474  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:15.310627  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.310779  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.310963  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:15.311179  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:15.311559  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:15.311577  109098 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 08:56:15.417074  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:56:15.417106  109098 main.go:141] libmachine: Detecting the provisioner...
	I1018 08:56:15.417117  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:15.420220  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.420655  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.420694  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.420860  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:15.421079  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.421336  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.421487  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:15.421671  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:15.421870  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:15.421881  109098 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 08:56:15.527449  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 08:56:15.527544  109098 main.go:141] libmachine: found compatible host: buildroot
	I1018 08:56:15.527559  109098 main.go:141] libmachine: Provisioning with buildroot...
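Provisioner detection is driven by the `cat /etc/os-release` output above: ID=buildroot selects the buildroot provisioner. A small sketch of that key=value parse (illustrative, not libmachine's actual parser):

// detect_provisioner.go: sketch of reading /etc/os-release, as in the
// "Detecting the provisioner" step above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	// the guest above reports ID=buildroot, so the buildroot
	// provisioner is chosen
	fmt.Println("ID:", info["ID"], "VERSION_ID:", info["VERSION_ID"])
}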
	I1018 08:56:15.527572  109098 main.go:141] libmachine: (addons-281483) Calling .GetMachineName
	I1018 08:56:15.527891  109098 buildroot.go:166] provisioning hostname "addons-281483"
	I1018 08:56:15.527916  109098 main.go:141] libmachine: (addons-281483) Calling .GetMachineName
	I1018 08:56:15.528173  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:15.531082  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.531510  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.531536  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.531715  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:15.531912  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.532062  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.532208  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:15.532358  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:15.532583  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:15.532604  109098 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-281483 && echo "addons-281483" | sudo tee /etc/hostname
	I1018 08:56:15.660516  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-281483
	
	I1018 08:56:15.660556  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:15.663966  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.664521  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.664547  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.664854  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:15.665125  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.665362  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:15.665500  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:15.665747  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:15.665960  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:15.665976  109098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-281483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-281483/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-281483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:56:15.785341  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:56:15.785381  109098 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 08:56:15.785428  109098 buildroot.go:174] setting up certificates
	I1018 08:56:15.785447  109098 provision.go:84] configureAuth start
	I1018 08:56:15.785463  109098 main.go:141] libmachine: (addons-281483) Calling .GetMachineName
	I1018 08:56:15.785776  109098 main.go:141] libmachine: (addons-281483) Calling .GetIP
	I1018 08:56:15.788894  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.789255  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.789281  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.789471  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:15.792124  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.793048  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:15.793072  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:15.793289  109098 provision.go:143] copyHostCerts
	I1018 08:56:15.793355  109098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 08:56:15.793469  109098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 08:56:15.793533  109098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 08:56:15.793581  109098 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.addons-281483 san=[127.0.0.1 192.168.39.144 addons-281483 localhost minikube]
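The server certificate above is issued off the local CA with SANs covering loopback, the VM IP, and the machine hostnames, valid for the configured 26280h. A self-contained crypto/x509 sketch of the same idea (throwaway CA and elided error handling for brevity; not minikube's actual certificate code):

// server_cert.go: sketch of issuing a CA-signed server cert with the
// SANs from the provision log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA for the sketch; the real flow loads ca.pem/ca-key.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-281483"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1, the VM IP, and the hostnames
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.144")},
		DNSNames:    []string{"addons-281483", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}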
	I1018 08:56:16.243911  109098 provision.go:177] copyRemoteCerts
	I1018 08:56:16.243980  109098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:56:16.244007  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.247022  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.247557  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.247585  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.247873  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.248101  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.248263  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.248462  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:16.330992  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 08:56:16.360257  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:56:16.389440  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:56:16.417244  109098 provision.go:87] duration metric: took 631.781572ms to configureAuth
	I1018 08:56:16.417277  109098 buildroot.go:189] setting minikube options for container-runtime
	I1018 08:56:16.417477  109098 config.go:182] Loaded profile config "addons-281483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:56:16.417578  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.420542  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.420855  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.420879  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.421100  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.421348  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.421518  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.421646  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.421834  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:16.422063  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:16.422083  109098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:56:16.665714  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:56:16.665746  109098 main.go:141] libmachine: Checking connection to Docker...
	I1018 08:56:16.665754  109098 main.go:141] libmachine: (addons-281483) Calling .GetURL
	I1018 08:56:16.667296  109098 main.go:141] libmachine: (addons-281483) DBG | using libvirt version 8000000
	I1018 08:56:16.670442  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.670893  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.670911  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.671241  109098 main.go:141] libmachine: Docker is up and running!
	I1018 08:56:16.671258  109098 main.go:141] libmachine: Reticulating splines...
	I1018 08:56:16.671265  109098 client.go:171] duration metric: took 18.726657635s to LocalClient.Create
	I1018 08:56:16.671288  109098 start.go:167] duration metric: took 18.726719291s to libmachine.API.Create "addons-281483"
	I1018 08:56:16.671299  109098 start.go:293] postStartSetup for "addons-281483" (driver="kvm2")
	I1018 08:56:16.671310  109098 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:56:16.671327  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:16.671589  109098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:56:16.671616  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.673788  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.674154  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.674182  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.674358  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.674554  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.674745  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.674920  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:16.758837  109098 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:56:16.763618  109098 info.go:137] Remote host: Buildroot 2025.02
	I1018 08:56:16.763652  109098 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 08:56:16.763789  109098 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 08:56:16.763863  109098 start.go:296] duration metric: took 92.556448ms for postStartSetup
	I1018 08:56:16.763906  109098 main.go:141] libmachine: (addons-281483) Calling .GetConfigRaw
	I1018 08:56:16.764581  109098 main.go:141] libmachine: (addons-281483) Calling .GetIP
	I1018 08:56:16.767348  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.767816  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.767838  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.768090  109098 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/config.json ...
	I1018 08:56:16.768297  109098 start.go:128] duration metric: took 18.840793769s to createHost
	I1018 08:56:16.768322  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.771024  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.771457  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.771498  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.771679  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.771871  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.772026  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.772162  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.772363  109098 main.go:141] libmachine: Using SSH client type: native
	I1018 08:56:16.772556  109098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1018 08:56:16.772566  109098 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 08:56:16.878591  109098 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760777776.840414014
	
	I1018 08:56:16.878617  109098 fix.go:216] guest clock: 1760777776.840414014
	I1018 08:56:16.878628  109098 fix.go:229] Guest: 2025-10-18 08:56:16.840414014 +0000 UTC Remote: 2025-10-18 08:56:16.768310418 +0000 UTC m=+18.957302397 (delta=72.103596ms)
	I1018 08:56:16.878663  109098 fix.go:200] guest clock delta is within tolerance: 72.103596ms
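The clock check runs `date +%s.%N` on the guest, parses the seconds.nanoseconds output, and compares it against the host clock; here the delta was ~72ms, inside tolerance. A sketch of that parse-and-compare (assumes a 9-digit fractional part, as GNU date emits):

// clock_skew.go: sketch of the guest-clock check above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1760777776.840414014" into a time.Time,
// treating the fractional part as nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	guest, err := parseGuestClock("1760777776.840414014") // value from the log
	if err != nil {
		panic(err)
	}
	// the run above measured ~72ms against the host clock
	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
}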
	I1018 08:56:16.878668  109098 start.go:83] releasing machines lock for "addons-281483", held for 18.951327204s
	I1018 08:56:16.878692  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:16.879007  109098 main.go:141] libmachine: (addons-281483) Calling .GetIP
	I1018 08:56:16.881733  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.882203  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.882232  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.882420  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:16.882919  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:16.883109  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:16.883246  109098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:56:16.883303  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.883374  109098 ssh_runner.go:195] Run: cat /version.json
	I1018 08:56:16.883399  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:16.886850  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.886906  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.887371  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.887408  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.887449  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:16.887467  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:16.887696  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.887699  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:16.887907  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.887971  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:16.888130  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.888210  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:16.888315  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:16.888396  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:16.997117  109098 ssh_runner.go:195] Run: systemctl --version
	I1018 08:56:17.003869  109098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:56:17.161450  109098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:56:17.168429  109098 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:56:17.168531  109098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:56:17.190211  109098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 08:56:17.190252  109098 start.go:495] detecting cgroup driver to use...
	I1018 08:56:17.190341  109098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:56:17.210415  109098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:56:17.229206  109098 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:56:17.229270  109098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:56:17.247098  109098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:56:17.263909  109098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:56:17.412082  109098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:56:17.626380  109098 docker.go:234] disabling docker service ...
	I1018 08:56:17.626467  109098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:56:17.643260  109098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:56:17.659305  109098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:56:17.818236  109098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:56:17.960822  109098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:56:17.976845  109098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:56:17.999430  109098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:56:17.999502  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.011935  109098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 08:56:18.012007  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.024619  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.037190  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.049509  109098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:56:18.063171  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.075575  109098 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:56:18.096058  109098 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
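Each sed one-liner above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf. For illustration, the cgroup_manager edit expressed as an equivalent Go regexp replacement:

// crio_conf.go: sketch of the cgroup_manager rewrite done above via sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	fmt.Println(re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`))
}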
	I1018 08:56:18.109396  109098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:56:18.119783  109098 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 08:56:18.119864  109098 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 08:56:18.139876  109098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
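The sysctl probe above is allowed to fail because /proc/sys/net/bridge/ only appears once br_netfilter is loaded; the fallback loads the module and then enables IPv4 forwarding. A sketch of that sequence (must run as root; illustrative only):

// netfilter_check.go: sketch of the netfilter bootstrap above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// the sysctl path exists only after br_netfilter is loaded,
		// which is why the first check in the log may fail harmlessly
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe: %v: %s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("ip_forward: %v", err)
	}
	log.Println("bridge netfilter and IPv4 forwarding enabled")
}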
	I1018 08:56:18.151588  109098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:56:18.286517  109098 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:56:18.393985  109098 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:56:18.394079  109098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:56:18.399723  109098 start.go:563] Will wait 60s for crictl version
	I1018 08:56:18.399807  109098 ssh_runner.go:195] Run: which crictl
	I1018 08:56:18.403806  109098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 08:56:18.443694  109098 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 08:56:18.443789  109098 ssh_runner.go:195] Run: crio --version
	I1018 08:56:18.472972  109098 ssh_runner.go:195] Run: crio --version
	I1018 08:56:18.506091  109098 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 08:56:18.507536  109098 main.go:141] libmachine: (addons-281483) Calling .GetIP
	I1018 08:56:18.510480  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:18.510898  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:18.510939  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:18.511225  109098 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 08:56:18.515680  109098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:56:18.530924  109098 kubeadm.go:883] updating cluster {Name:addons-281483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-281483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1018 08:56:18.531052  109098 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:56:18.531102  109098 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:56:18.566606  109098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 08:56:18.566717  109098 ssh_runner.go:195] Run: which lz4
	I1018 08:56:18.571118  109098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 08:56:18.576083  109098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 08:56:18.576172  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1018 08:56:20.045175  109098 crio.go:462] duration metric: took 1.474087762s to copy over tarball
	I1018 08:56:20.045265  109098 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 08:56:21.700596  109098 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.65529131s)
	I1018 08:56:21.700635  109098 crio.go:469] duration metric: took 1.655427875s to extract the tarball
	I1018 08:56:21.700645  109098 ssh_runner.go:146] rm: /preloaded.tar.lz4
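The preload path above scps a ~409MB lz4-compressed image tarball into the guest and unpacks it under /var with xattrs preserved, so crictl sees the Kubernetes images without pulling them. The extraction step as a Go sketch (shelling out to tar, mirroring the remote command in the log):

// preload_extract.go: sketch of the preload extraction step above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract: %v: %s", err, out)
	}
	log.Println("preloaded images extracted")
}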
	I1018 08:56:21.742360  109098 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:56:21.792978  109098 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:56:21.793007  109098 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:56:21.793018  109098 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.34.1 crio true true} ...
	I1018 08:56:21.793151  109098 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-281483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-281483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:56:21.793237  109098 ssh_runner.go:195] Run: crio config
	I1018 08:56:21.837907  109098 cni.go:84] Creating CNI manager for ""
	I1018 08:56:21.837932  109098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:56:21.837951  109098 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:56:21.837984  109098 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-281483 NodeName:addons-281483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:56:21.838121  109098 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-281483"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.144"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 08:56:21.838207  109098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:56:21.850356  109098 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:56:21.850448  109098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:56:21.861437  109098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1018 08:56:21.884712  109098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:56:21.907372  109098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
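The kubeadm and kubelet configs above are rendered from templates and shipped to the guest as kubeadm.yaml.new and systemd drop-ins. A minimal text/template sketch of rendering one fragment (struct and field names here are illustrative, not minikube's actual templates):

// kubeadm_tmpl.go: sketch of rendering a fragment of the kubeadm config.
package main

import (
	"os"
	"text/template"
)

type nodeCfg struct {
	Name string
	IP   string
	Port int
}

const frag = `localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, nodeCfg{Name: "addons-281483", IP: "192.168.39.144", Port: 8443})
}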
	I1018 08:56:21.929996  109098 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I1018 08:56:21.934471  109098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
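
The one-liner above is minikube's atomic /etc/hosts update: it filters out any stale control-plane.minikube.internal entry, appends the current IP, and copies the temp file back with sudo so /etc/hosts is replaced in one step. The same idiom spelled out as a standalone script (HOSTS_IP is a hypothetical variable name; the commands mirror the logged one-liner):

    HOSTS_IP=192.168.39.144
    # Keep every line except a stale control-plane entry, then append the
    # current mapping; write to a temp file so the swap is a single cp.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '%s\tcontrol-plane.minikube.internal\n' "$HOSTS_IP"
    } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
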
	I1018 08:56:21.950003  109098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:56:22.096556  109098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:56:22.128232  109098 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483 for IP: 192.168.39.144
	I1018 08:56:22.128264  109098 certs.go:195] generating shared ca certs ...
	I1018 08:56:22.128288  109098 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.128494  109098 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 08:56:22.234508  109098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt ...
	I1018 08:56:22.234539  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt: {Name:mkebd71f72cc8bf135a21a3e4502ff8e899c029a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.234741  109098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key ...
	I1018 08:56:22.234761  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key: {Name:mk816a9f5f92fdfa58c910f59af58bfdfd4e19d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.234867  109098 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 08:56:22.307562  109098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt ...
	I1018 08:56:22.307595  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt: {Name:mk773baeb024e8da336cb893e40dad15ece3d9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.307793  109098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key ...
	I1018 08:56:22.307809  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key: {Name:mk1a542feb7afadc4359d6d11f02c50fbbe56431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.307918  109098 certs.go:257] generating profile certs ...
	I1018 08:56:22.307995  109098 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.key
	I1018 08:56:22.308023  109098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt with IP's: []
	I1018 08:56:22.471630  109098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt ...
	I1018 08:56:22.471660  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: {Name:mk3ba03d2a39d67763b1511c4f8b5ea36101499a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.471861  109098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.key ...
	I1018 08:56:22.471879  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.key: {Name:mka50679fe9f87ea91c03043163d355758299589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.471990  109098 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key.49772443
	I1018 08:56:22.472019  109098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt.49772443 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I1018 08:56:22.930027  109098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt.49772443 ...
	I1018 08:56:22.930060  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt.49772443: {Name:mkb75f399235903a22fb2537a1b3608b47f3593a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.930291  109098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key.49772443 ...
	I1018 08:56:22.930313  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key.49772443: {Name:mk0b2c3c686aab24d476c0ccc2e2a1be3d57159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:22.930425  109098 certs.go:382] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt.49772443 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt
	I1018 08:56:22.930539  109098 certs.go:386] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key.49772443 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key
	I1018 08:56:22.930625  109098 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.key
	I1018 08:56:22.930653  109098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.crt with IP's: []
	I1018 08:56:23.152436  109098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.crt ...
	I1018 08:56:23.152471  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.crt: {Name:mkf7b80ce5b1b9ca34106ec1c0a53ee751799410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:23.152705  109098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.key ...
	I1018 08:56:23.152729  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.key: {Name:mk9911bcebd5c93aa2a37ef451ffeff0be4df5b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:23.152967  109098 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 08:56:23.153006  109098 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 08:56:23.153035  109098 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:56:23.153068  109098 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
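
At this point the shared CAs (minikubeCA, proxyClientCA) and the per-profile certs all exist on the host. A quick way to inspect one of them by hand, using the path from the log lines above:

    # Print subject and validity window of the freshly written cluster CA:
    openssl x509 -noout -subject -dates \
      -in /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
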
	I1018 08:56:23.153735  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:56:23.184412  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 08:56:23.214617  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:56:23.244360  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:56:23.273710  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:56:23.303001  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 08:56:23.333695  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:56:23.365439  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 08:56:23.398604  109098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:56:23.429870  109098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:56:23.452518  109098 ssh_runner.go:195] Run: openssl version
	I1018 08:56:23.459650  109098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:56:23.473462  109098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:56:23.479187  109098 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:56:23.479249  109098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:56:23.486647  109098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
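
The b5213941.0 link name above is OpenSSL's subject-hash convention: `openssl x509 -hash` prints the hash OpenSSL uses to look up CA files in /etc/ssl/certs, and the `.0` suffix disambiguates collisions. A sketch reproducing the link the log creates:

    # Derive the subject-hash link name and (re)create the symlink:
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
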
	I1018 08:56:23.500674  109098 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:56:23.506068  109098 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:56:23.506130  109098 kubeadm.go:400] StartCluster: {Name:addons-281483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-281483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:56:23.506241  109098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:56:23.506296  109098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:56:23.547670  109098 cri.go:89] found id: ""
	I1018 08:56:23.547758  109098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:56:23.560838  109098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:56:23.573126  109098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:56:23.587336  109098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:56:23.587364  109098 kubeadm.go:157] found existing configuration files:
	
	I1018 08:56:23.587423  109098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:56:23.599379  109098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:56:23.599456  109098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:56:23.611584  109098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:56:23.622595  109098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:56:23.622685  109098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:56:23.634842  109098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:56:23.645631  109098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:56:23.645705  109098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:56:23.657502  109098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:56:23.668840  109098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:56:23.668935  109098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:56:23.681940  109098 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
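
kubeadm init is launched here with the staged config and a long --ignore-preflight-errors list. To reproduce just the preflight stage by hand, a hypothetical manual invocation (`kubeadm init phase preflight` is a standard kubeadm subcommand; the ignore names below are a subset of the logged list):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem
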
	I1018 08:56:23.735461  109098 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:56:23.735557  109098 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:56:23.838212  109098 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:56:23.838360  109098 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:56:23.838507  109098 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:56:23.849804  109098 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:56:23.973049  109098 out.go:252]   - Generating certificates and keys ...
	I1018 08:56:23.973198  109098 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:56:23.973317  109098 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:56:24.721028  109098 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:56:25.016875  109098 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:56:25.246990  109098 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:56:25.424994  109098 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:56:25.620284  109098 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:56:25.620450  109098 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-281483 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I1018 08:56:25.870048  109098 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:56:25.870206  109098 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-281483 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I1018 08:56:26.210841  109098 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:56:26.232106  109098 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:56:26.514587  109098 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:56:26.514688  109098 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:56:26.960738  109098 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:56:27.181414  109098 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:56:27.702509  109098 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:56:28.251905  109098 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:56:28.666099  109098 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:56:28.668673  109098 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:56:28.670772  109098 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:56:28.672585  109098 out.go:252]   - Booting up control plane ...
	I1018 08:56:28.672702  109098 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:56:28.672806  109098 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:56:28.673216  109098 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:56:28.690441  109098 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:56:28.690598  109098 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:56:28.697598  109098 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:56:28.697934  109098 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:56:28.698007  109098 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:56:28.866110  109098 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:56:28.866260  109098 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:56:30.365910  109098 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501347755s
	I1018 08:56:30.370963  109098 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:56:30.371087  109098 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.144:8443/livez
	I1018 08:56:30.371423  109098 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:56:30.371539  109098 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:56:33.602128  109098 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.233006496s
	I1018 08:56:34.330221  109098 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.961749752s
	I1018 08:56:36.367944  109098 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001313655s
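
The three control-plane checks above hit each component's health endpoint directly. They can be probed by hand from the node (sketch; -k because the serving certificates are cluster-internal, and these health paths are anonymously readable in a default deployment):

    curl -k https://192.168.39.144:8443/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez         # kube-scheduler
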
	I1018 08:56:36.381095  109098 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:56:36.397811  109098 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:56:36.411197  109098 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:56:36.411400  109098 kubeadm.go:318] [mark-control-plane] Marking the node addons-281483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:56:36.423362  109098 kubeadm.go:318] [bootstrap-token] Using token: mnl3fg.md044yn29wzb9oh1
	I1018 08:56:36.424985  109098 out.go:252]   - Configuring RBAC rules ...
	I1018 08:56:36.425159  109098 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:56:36.435303  109098 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:56:36.444496  109098 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:56:36.449164  109098 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:56:36.453150  109098 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:56:36.459748  109098 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:56:36.775923  109098 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:56:37.219619  109098 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:56:37.776466  109098 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:56:37.777347  109098 kubeadm.go:318] 
	I1018 08:56:37.777458  109098 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:56:37.777483  109098 kubeadm.go:318] 
	I1018 08:56:37.777571  109098 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:56:37.777596  109098 kubeadm.go:318] 
	I1018 08:56:37.777626  109098 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:56:37.777700  109098 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:56:37.777767  109098 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:56:37.777776  109098 kubeadm.go:318] 
	I1018 08:56:37.777845  109098 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:56:37.777870  109098 kubeadm.go:318] 
	I1018 08:56:37.777952  109098 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:56:37.777962  109098 kubeadm.go:318] 
	I1018 08:56:37.778044  109098 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:56:37.778165  109098 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:56:37.778268  109098 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:56:37.778278  109098 kubeadm.go:318] 
	I1018 08:56:37.778403  109098 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:56:37.778504  109098 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:56:37.778512  109098 kubeadm.go:318] 
	I1018 08:56:37.778577  109098 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mnl3fg.md044yn29wzb9oh1 \
	I1018 08:56:37.778670  109098 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 \
	I1018 08:56:37.778690  109098 kubeadm.go:318] 	--control-plane 
	I1018 08:56:37.778694  109098 kubeadm.go:318] 
	I1018 08:56:37.778790  109098 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:56:37.778801  109098 kubeadm.go:318] 
	I1018 08:56:37.778905  109098 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mnl3fg.md044yn29wzb9oh1 \
	I1018 08:56:37.778994  109098 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 
	I1018 08:56:37.780591  109098 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
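
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA file, here under minikube's certificatesDir /var/lib/minikube/certs (this assumes an RSA CA key, which is what the logs above show minikube generating):

    # Recompute the value passed to --discovery-token-ca-cert-hash:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
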
	I1018 08:56:37.780647  109098 cni.go:84] Creating CNI manager for ""
	I1018 08:56:37.780668  109098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:56:37.782481  109098 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 08:56:37.783842  109098 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 08:56:37.797134  109098 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
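
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI plugin chain matching the 10.244.0.0/16 pod subnet configured earlier. A representative conflist (illustrative sketch only, not minikube's exact file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
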
	I1018 08:56:37.825503  109098 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:56:37.825619  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:37.825658  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-281483 minikube.k8s.io/updated_at=2025_10_18T08_56_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=addons-281483 minikube.k8s.io/primary=true
	I1018 08:56:37.866065  109098 ops.go:34] apiserver oom_adj: -16
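
The oom_adj read above confirms the apiserver is shielded from the OOM killer; the negative value (-16) makes the kernel strongly prefer other victims. To inspect it by hand (oom_adj is the legacy knob the log reads; oom_score_adj is its modern equivalent):

    APISERVER_PID=$(pgrep kube-apiserver)
    cat "/proc/${APISERVER_PID}/oom_adj" "/proc/${APISERVER_PID}/oom_score_adj"
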
	I1018 08:56:37.992086  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:38.493019  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:38.992802  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:39.492339  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:39.992917  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:40.492196  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:40.992454  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:41.492244  109098 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:56:41.587181  109098 kubeadm.go:1113] duration metric: took 3.761646492s to wait for elevateKubeSystemPrivileges
	I1018 08:56:41.587219  109098 kubeadm.go:402] duration metric: took 18.081094698s to StartCluster
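
The eight `kubectl get sa default` calls above are a readiness poll: the default ServiceAccount only appears once kube-controller-manager's service-account controllers are running, which is what the elevateKubeSystemPrivileges wait measures. A hypothetical standalone equivalent (the half-second cadence mirrors the timestamps above):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
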
	I1018 08:56:41.587240  109098 settings.go:142] acquiring lock: {Name:mk3a2bfd7987fbaaa6a53ab72c677b4cd8c4a8ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:41.587362  109098 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 08:56:41.587743  109098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/kubeconfig: {Name:mk43b332619cb442c058a4739a3d7e69542c9a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:56:41.588409  109098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:56:41.588454  109098 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:56:41.588491  109098 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:56:41.588634  109098 addons.go:69] Setting yakd=true in profile "addons-281483"
	I1018 08:56:41.588646  109098 config.go:182] Loaded profile config "addons-281483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:56:41.588663  109098 addons.go:238] Setting addon yakd=true in "addons-281483"
	I1018 08:56:41.588652  109098 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-281483"
	I1018 08:56:41.588684  109098 addons.go:69] Setting gcp-auth=true in profile "addons-281483"
	I1018 08:56:41.588701  109098 addons.go:69] Setting volumesnapshots=true in profile "addons-281483"
	I1018 08:56:41.588704  109098 addons.go:69] Setting registry=true in profile "addons-281483"
	I1018 08:56:41.588714  109098 addons.go:238] Setting addon volumesnapshots=true in "addons-281483"
	I1018 08:56:41.588716  109098 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-281483"
	I1018 08:56:41.588721  109098 mustload.go:65] Loading cluster: addons-281483
	I1018 08:56:41.588725  109098 addons.go:238] Setting addon registry=true in "addons-281483"
	I1018 08:56:41.588730  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588730  109098 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-281483"
	I1018 08:56:41.588735  109098 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-281483"
	I1018 08:56:41.588757  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588758  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588768  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588688  109098 addons.go:69] Setting storage-provisioner=true in profile "addons-281483"
	I1018 08:56:41.588663  109098 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-281483"
	I1018 08:56:41.589218  109098 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-281483"
	I1018 08:56:41.589254  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.589378  109098 addons.go:69] Setting ingress=true in profile "addons-281483"
	I1018 08:56:41.589401  109098 addons.go:238] Setting addon ingress=true in "addons-281483"
	I1018 08:56:41.589443  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588706  109098 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-281483"
	I1018 08:56:41.589587  109098 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-281483"
	I1018 08:56:41.589705  109098 addons.go:238] Setting addon storage-provisioner=true in "addons-281483"
	I1018 08:56:41.589730  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.589782  109098 addons.go:69] Setting inspektor-gadget=true in profile "addons-281483"
	I1018 08:56:41.589815  109098 addons.go:238] Setting addon inspektor-gadget=true in "addons-281483"
	I1018 08:56:41.589857  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.590006  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.590021  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.590041  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.590066  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.589769  109098 addons.go:69] Setting ingress-dns=true in profile "addons-281483"
	I1018 08:56:41.590296  109098 out.go:179] * Verifying Kubernetes components...
	I1018 08:56:41.590361  109098 addons.go:69] Setting metrics-server=true in profile "addons-281483"
	I1018 08:56:41.590373  109098 addons.go:238] Setting addon metrics-server=true in "addons-281483"
	I1018 08:56:41.588697  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.588697  109098 addons.go:69] Setting volcano=true in profile "addons-281483"
	I1018 08:56:41.590461  109098 addons.go:238] Setting addon volcano=true in "addons-281483"
	I1018 08:56:41.588678  109098 addons.go:69] Setting default-storageclass=true in profile "addons-281483"
	I1018 08:56:41.588716  109098 addons.go:69] Setting registry-creds=true in profile "addons-281483"
	I1018 08:56:41.590490  109098 addons.go:69] Setting cloud-spanner=true in profile "addons-281483"
	I1018 08:56:41.590500  109098 addons.go:238] Setting addon cloud-spanner=true in "addons-281483"
	I1018 08:56:41.590537  109098 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-281483"
	I1018 08:56:41.590585  109098 config.go:182] Loaded profile config "addons-281483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:56:41.590597  109098 addons.go:238] Setting addon registry-creds=true in "addons-281483"
	I1018 08:56:41.590678  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.590276  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.590723  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.590800  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.590838  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.590894  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.590918  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.590994  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591023  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.591000  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591070  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.591128  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591153  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.590306  109098 addons.go:238] Setting addon ingress-dns=true in "addons-281483"
	I1018 08:56:41.591208  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591224  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.591288  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.591333  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591356  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.591766  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.591785  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.591821  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.592228  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.592272  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.592441  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.592732  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.592760  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.592802  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.593768  109098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:56:41.597706  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.597786  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.598370  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.598423  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.604989  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.605068  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.606560  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.606620  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.621805  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1018 08:56:41.625736  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.625845  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I1018 08:56:41.626701  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.627509  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.627600  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.628092  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.629048  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I1018 08:56:41.629450  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.629541  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.629383  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I1018 08:56:41.630565  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.631359  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.631380  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.631923  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.631938  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.632188  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.632293  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.632974  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.633002  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.633307  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.634078  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.634111  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.634495  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.634510  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.635112  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.635915  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.635971  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.638639  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1018 08:56:41.639542  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.640364  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.640441  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.641163  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.642188  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.642323  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.647263  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1018 08:56:41.647505  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I1018 08:56:41.647638  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I1018 08:56:41.647756  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I1018 08:56:41.649465  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.649542  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I1018 08:56:41.649567  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I1018 08:56:41.650036  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.650061  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.650251  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.650687  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.651470  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.651488  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.651618  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.651630  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.652718  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.652822  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.652862  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.652947  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.653213  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1018 08:56:41.653380  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I1018 08:56:41.654172  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.654186  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.654360  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.654887  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.654925  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.655023  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.655063  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.655537  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.655904  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.655928  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.656017  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.656092  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.656569  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.656838  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.656859  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.657080  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.657128  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.657268  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.657316  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.657428  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.657537  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.657734  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.658101  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.658126  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.658000  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.658205  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.661275  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.661356  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.661560  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.662981  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.662982  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.663042  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.665879  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.667437  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I1018 08:56:41.667550  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.667633  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I1018 08:56:41.667653  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I1018 08:56:41.670685  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.670747  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.671364  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.671613  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.673828  109098 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-281483"
	I1018 08:56:41.673874  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.674251  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.674293  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.680042  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I1018 08:56:41.680243  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I1018 08:56:41.680368  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.680573  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.680587  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.680723  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.680736  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.684386  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I1018 08:56:41.684587  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.684763  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.684777  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.684844  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.687030  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.687122  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.687247  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.687333  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.687806  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.687844  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.688073  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.688234  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.688247  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.688372  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.688383  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.688628  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.688780  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.689192  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.689237  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.689457  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.689513  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.689538  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.690310  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.690333  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.690914  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.691225  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.691646  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.693552  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I1018 08:56:41.694011  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.694515  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.694633  109098 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:56:41.694690  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I1018 08:56:41.695436  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43363
	I1018 08:56:41.695614  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:56:41.695624  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1018 08:56:41.695699  109098 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:56:41.695711  109098 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:56:41.695733  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.696206  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.696828  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.696857  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.697171  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:56:41.697191  109098 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:56:41.697213  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.697754  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.697923  109098 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:56:41.698035  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.698710  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.698832  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.698855  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.699318  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.699561  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.701814  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.701943  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I1018 08:56:41.702020  109098 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:56:41.702157  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.704538  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I1018 08:56:41.704558  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.704538  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I1018 08:56:41.704674  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.704686  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.704757  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.704763  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.704778  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.704834  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.704860  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I1018 08:56:41.704932  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:41.704942  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:41.705126  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:41.705180  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:41.705193  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:41.705200  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:41.705527  109098 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:56:41.705728  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.705818  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.705899  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.706047  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.706166  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41337
	I1018 08:56:41.705972  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.706331  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.705995  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:41.706382  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	W1018 08:56:41.706492  109098 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:56:41.707024  109098 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:56:41.707076  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:56:41.707100  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.706014  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:41.707166  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.707302  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.707342  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.707371  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.707345  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.707467  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.707496  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I1018 08:56:41.707674  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.707740  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.707755  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.707108  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.707133  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.708154  109098 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:56:41.708208  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.708244  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.708257  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.708468  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.708568  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.708948  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.709177  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.709257  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.709336  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.709385  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.709447  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.709471  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.709515  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.709532  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.710008  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.710213  109098 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:56:41.710236  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:56:41.710255  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.710315  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.710726  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.711628  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.711783  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.714427  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.714504  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.714521  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.714557  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.714994  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.715015  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.715258  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.715547  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.715799  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.716053  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.717415  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.717525  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.718038  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.718211  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.718348  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.718759  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.719484  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.719837  109098 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:56:41.720902  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.721064  109098 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:56:41.721129  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.721210  109098 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:56:41.721267  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:56:41.721303  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.721387  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.721990  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.722061  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.722371  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.722883  109098 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:56:41.723170  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:56:41.723192  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.723541  109098 addons.go:238] Setting addon default-storageclass=true in "addons-281483"
	I1018 08:56:41.723579  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:41.723966  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.724016  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.725171  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.725859  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I1018 08:56:41.726149  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I1018 08:56:41.726754  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.726906  109098 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:56:41.726982  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.727004  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.727596  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.727771  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.727939  109098 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:56:41.728068  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.728082  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.728642  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.728662  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.728680  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.728753  109098 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:56:41.728779  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:56:41.728797  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.729243  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.729723  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.730014  109098 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:56:41.730029  109098 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:56:41.730046  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.730707  109098 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:56:41.730884  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.731134  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.731426  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.732157  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.732232  109098 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:56:41.732242  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:56:41.732344  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.732796  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.732849  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I1018 08:56:41.733889  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.734167  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.734456  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.735361  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.735379  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.735361  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I1018 08:56:41.735777  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.735797  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.735858  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.735875  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.736130  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.736398  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.736501  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.736577  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.736915  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.736975  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.737503  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.737537  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.737643  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.737659  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.738073  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.738201  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.738486  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.738831  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.738835  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.739411  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.741688  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.742036  109098 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:56:41.742358  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.743396  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.742770  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.743426  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.743486  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.743520  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.743573  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.743655  109098 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:56:41.743674  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:56:41.743695  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.743770  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.744028  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.744055  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.744263  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.744421  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I1018 08:56:41.744546  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.744555  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.744575  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.744624  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.744666  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.744818  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.745009  109098 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:56:41.745053  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.745221  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.745476  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.745540  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.745643  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.746153  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.746283  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.746308  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.746508  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.746753  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.746970  109098 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:56:41.746989  109098 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:56:41.747021  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.746974  109098 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:56:41.747125  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.747342  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:56:41.748683  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.749549  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.749577  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.749813  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.750056  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.750071  109098 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:56:41.750084  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:56:41.750371  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.750419  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.750564  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.751321  109098 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:56:41.751346  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:56:41.751365  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.751874  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I1018 08:56:41.752445  109098 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:56:41.752700  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.753289  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.753312  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.753323  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.753436  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:56:41.753722  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.753795  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.753959  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.754225  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.754479  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.754487  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:41.754836  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:41.754921  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.755164  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.756243  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.756806  109098 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:56:41.756854  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.757174  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.757245  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.757445  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.757693  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:56:41.757703  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.757891  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.758973  109098 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:56:41.758992  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:56:41.759012  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.761091  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:56:41.762365  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:56:41.762685  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.763168  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.763206  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.763414  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.763604  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.763771  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.763897  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.764814  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:56:41.766172  109098 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:56:41.768506  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:56:41.768532  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:56:41.768557  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.770243  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I1018 08:56:41.770749  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:41.771217  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:41.771238  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:41.771644  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:41.771851  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:41.773407  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.773842  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:41.774004  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.774033  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.774063  109098 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:56:41.774076  109098 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:56:41.774101  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:41.774280  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.774464  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.774608  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.774831  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:41.778037  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.778544  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:41.778561  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:41.778800  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:41.778993  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:41.779147  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:41.779288  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	W1018 08:56:41.901178  109098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60442->192.168.39.144:22: read: connection reset by peer
	I1018 08:56:41.901234  109098 retry.go:31] will retry after 207.733561ms: ssh: handshake failed: read tcp 192.168.39.1:60442->192.168.39.144:22: read: connection reset by peer
	W1018 08:56:41.916963  109098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60464->192.168.39.144:22: read: connection reset by peer
	I1018 08:56:41.916994  109098 retry.go:31] will retry after 366.107143ms: ssh: handshake failed: read tcp 192.168.39.1:60464->192.168.39.144:22: read: connection reset by peer
	W1018 08:56:41.917060  109098 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60470->192.168.39.144:22: read: connection reset by peer
	I1018 08:56:41.917070  109098 retry.go:31] will retry after 363.029302ms: ssh: handshake failed: read tcp 192.168.39.1:60470->192.168.39.144:22: read: connection reset by peer
	I1018 08:56:42.025816  109098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:56:42.025910  109098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 08:56:42.404730  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:56:42.405830  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:56:42.428196  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:56:42.444194  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:56:42.459920  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:56:42.463420  109098 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:56:42.463452  109098 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:56:42.463655  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:56:42.464947  109098 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:56:42.464966  109098 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:56:42.476299  109098 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:42.476328  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:56:42.476299  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:56:42.498847  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:56:42.560245  109098 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:56:42.560277  109098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:56:42.956271  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:43.018514  109098 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:56:43.018549  109098 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:56:43.055193  109098 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:56:43.055221  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:56:43.129639  109098 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:56:43.129665  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:56:43.277176  109098 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:56:43.277205  109098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:56:43.303134  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:56:43.303178  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:56:43.303798  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:56:43.640030  109098 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:56:43.640068  109098 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:56:43.742557  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:56:43.828129  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:56:43.828170  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:56:43.926321  109098 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:56:43.926351  109098 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:56:44.037627  109098 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:56:44.037662  109098 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:56:44.425291  109098 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:56:44.425325  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:56:44.549261  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:56:44.549296  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:56:44.552118  109098 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:56:44.552154  109098 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:56:44.652982  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:56:44.653012  109098 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:56:44.802097  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:56:44.853915  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:56:44.853949  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:56:45.002659  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:56:45.113847  109098 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:56:45.113874  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:56:45.232949  109098 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:56:45.232979  109098 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:56:45.483206  109098 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.457247894s)
	I1018 08:56:45.483252  109098 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1018 08:56:45.483269  109098 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.457416163s)
	I1018 08:56:45.484012  109098 node_ready.go:35] waiting up to 6m0s for node "addons-281483" to be "Ready" ...
	I1018 08:56:45.489387  109098 node_ready.go:49] node "addons-281483" is "Ready"
	I1018 08:56:45.489423  109098 node_ready.go:38] duration metric: took 5.356277ms for node "addons-281483" to be "Ready" ...
	I1018 08:56:45.489442  109098 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:56:45.489500  109098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:56:45.554535  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:56:45.878876  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:56:45.878916  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:56:45.991343  109098 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-281483" context rescaled to 1 replicas
	I1018 08:56:46.163898  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:56:46.163928  109098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:56:46.518384  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:56:46.518408  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:56:46.697806  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:56:46.697839  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:56:46.970175  109098 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:56:46.970218  109098 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:56:47.536348  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:56:49.164177  109098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:56:49.164232  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:49.168176  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:49.168783  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:49.168833  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:49.169065  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:49.169322  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:49.169558  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:49.169805  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:49.667924  109098 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:56:49.861650  109098 addons.go:238] Setting addon gcp-auth=true in "addons-281483"
	I1018 08:56:49.861713  109098 host.go:66] Checking if "addons-281483" exists ...
	I1018 08:56:49.862084  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:49.862127  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:49.877356  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I1018 08:56:49.877965  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:49.878540  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:49.878566  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:49.879030  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:49.879658  109098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 08:56:49.879697  109098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 08:56:49.894395  109098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I1018 08:56:49.894956  109098 main.go:141] libmachine: () Calling .GetVersion
	I1018 08:56:49.895502  109098 main.go:141] libmachine: Using API Version  1
	I1018 08:56:49.895529  109098 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 08:56:49.895891  109098 main.go:141] libmachine: () Calling .GetMachineName
	I1018 08:56:49.896151  109098 main.go:141] libmachine: (addons-281483) Calling .GetState
	I1018 08:56:49.897959  109098 main.go:141] libmachine: (addons-281483) Calling .DriverName
	I1018 08:56:49.898204  109098 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:56:49.898229  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHHostname
	I1018 08:56:49.901550  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:49.902100  109098 main.go:141] libmachine: (addons-281483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:78:29", ip: ""} in network mk-addons-281483: {Iface:virbr1 ExpiryTime:2025-10-18 09:56:14 +0000 UTC Type:0 Mac:52:54:00:4f:78:29 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:addons-281483 Clientid:01:52:54:00:4f:78:29}
	I1018 08:56:49.902127  109098 main.go:141] libmachine: (addons-281483) DBG | domain addons-281483 has defined IP address 192.168.39.144 and MAC address 52:54:00:4f:78:29 in network mk-addons-281483
	I1018 08:56:49.902357  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHPort
	I1018 08:56:49.902635  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHKeyPath
	I1018 08:56:49.902819  109098 main.go:141] libmachine: (addons-281483) Calling .GetSSHUsername
	I1018 08:56:49.903035  109098 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/addons-281483/id_rsa Username:docker}
	I1018 08:56:51.172660  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.767875052s)
	I1018 08:56:51.172692  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.766830832s)
	I1018 08:56:51.172714  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.744490974s)
	I1018 08:56:51.172729  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.172737  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.172741  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.172746  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.172794  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.728561064s)
	I1018 08:56:51.172831  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.172840  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.71289629s)
	I1018 08:56:51.172849  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.172856  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.172865  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.172729  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.172928  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.172987  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.709309001s)
	I1018 08:56:51.173013  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173022  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.173241  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.696882412s)
	I1018 08:56:51.173270  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173278  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.173343  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.674464248s)
	I1018 08:56:51.173359  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173365  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.173467  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.217166995s)
	W1018 08:56:51.173493  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:56:51.173513  109098 retry.go:31] will retry after 144.760032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
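The addons.go/retry.go pair above is the addon installer's apply-and-retry loop: the ig-crd.yaml apply failed validation (one document in the file is missing apiVersion and kind), so the same command is re-run after a short, growing delay (144ms here, with later attempts in this log waiting 308ms, 477ms, and longer). A stdlib-only sketch of that retry-with-backoff shape, assuming nothing about minikube's actual retry package beyond what the log shows (retryWithBackoff is an invented name):

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a growing delay between tries (pure doubling here; the delays
    // in the log grow similarly but with jitter).
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base << i
            log.Printf("will retry after %v: %v", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        n := 0
        // Demo: fail twice, then succeed on the third try.
        if err := retryWithBackoff(func() error {
            n++
            if n < 3 {
                return fmt.Errorf("attempt %d failed", n)
            }
            return nil
        }, 5, 150*time.Millisecond); err != nil {
            panic(err)
        }
    }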
	I1018 08:56:51.173577  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.869760099s)
	I1018 08:56:51.173596  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173605  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.173699  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.431081917s)
	I1018 08:56:51.173740  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.371613233s)
	I1018 08:56:51.173763  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173773  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.173773  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.173786  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.174001  109098 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.68448191s)
	I1018 08:56:51.174027  109098 api_server.go:72] duration metric: took 9.585543794s to wait for apiserver process to appear ...
	I1018 08:56:51.174035  109098 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:56:51.174052  109098 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I1018 08:56:51.174205  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.171507746s)
	I1018 08:56:51.174233  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.174243  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175380  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175397  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175410  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175418  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175473  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175493  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175499  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175507  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175514  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175553  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175572  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175581  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175588  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175606  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175644  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175663  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175669  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175676  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175682  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175720  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175720  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175739  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175746  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175753  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175761  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175769  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175768  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175776  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175784  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175804  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175810  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175753  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175825  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175834  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175841  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175817  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175857  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175864  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175882  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175848  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175897  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175902  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.175889  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175932  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175935  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.175938  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.175941  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.175948  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.175954  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.176205  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.176240  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.176248  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176256  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.176263  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.176484  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.176498  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.176499  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.176509  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176517  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.176524  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176861  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.176900  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.176909  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176921  109098 addons.go:479] Verifying addon ingress=true in "addons-281483"
	I1018 08:56:51.177011  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177016  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177042  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177043  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177024  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176573  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177116  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177160  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.177173  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.177241  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177262  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177269  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177325  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177364  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177376  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177418  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177679  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177794  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.177809  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177817  109098 addons.go:479] Verifying addon registry=true in "addons-281483"
	I1018 08:56:51.177688  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.179460  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.179494  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.179501  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.176979  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.176601  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.177874  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177749  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.180514  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.180524  109098 addons.go:479] Verifying addon metrics-server=true in "addons-281483"
	I1018 08:56:51.177713  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.180540  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.177727  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.181315  109098 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-281483 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:56:51.182629  109098 out.go:179] * Verifying registry addon...
	I1018 08:56:51.182657  109098 out.go:179] * Verifying ingress addon...
	I1018 08:56:51.185226  109098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:56:51.185226  109098 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:56:51.318847  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:51.382219  109098 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I1018 08:56:51.423843  109098 api_server.go:141] control plane version: v1.34.1
	I1018 08:56:51.423868  109098 api_server.go:131] duration metric: took 249.828598ms to wait for apiserver health ...
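api_server.go above polls the apiserver's /healthz endpoint until it answers 200 with body "ok", then reads the control plane version. A hedged sketch of such a poll (stdlib only; TLS verification is skipped because the test cluster's apiserver certificate is not signed by a system CA, and waitHealthz is a name made up for this illustration):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 with body "ok" or the
    // timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is self-signed in this environment.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not return ok within %v", url, timeout)
    }

    func main() {
        // Endpoint from the log line above.
        if err := waitHealthz("https://192.168.39.144:8443/healthz", time.Minute); err != nil {
            panic(err)
        }
    }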
	I1018 08:56:51.423878  109098 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:56:51.433861  109098 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:56:51.433881  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:51.434343  109098 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:56:51.434355  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
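The kapi.go lines above, and the long run of "waiting for pod" lines that follow, are a poll loop: list the pods matching a label selector in a namespace and keep checking until one reports Running. An illustrative client-go version of that wait; the clientset wiring and the function name waitForLabeledPod are assumptions, while the namespace and selector mirror the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPod polls until a pod matching selector in ns reports Running.
    func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        // Mirrors the ingress wait in the log above.
        if err := waitForLabeledPod(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
            panic(err)
        }
        fmt.Println("ingress-nginx pod is Running")
    }

The same loop with ns "kube-system" and selector "kubernetes.io/minikube-addons=registry" would mirror the registry wait.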
	I1018 08:56:51.560373  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.560400  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.560686  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.560703  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.560741  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	W1018 08:56:51.560810  109098 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1018 08:56:51.589987  109098 system_pods.go:59] 15 kube-system pods found
	I1018 08:56:51.590065  109098 system_pods.go:61] "amd-gpu-device-plugin-6ms88" [9d5daeed-6150-4bb9-89a0-3cf2f1273a9c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:56:51.590085  109098 system_pods.go:61] "coredns-66bc5c9577-cjb55" [5fa37c00-217c-4278-9ba2-385e4a772820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:56:51.590097  109098 system_pods.go:61] "coredns-66bc5c9577-mcrjx" [d4a8f109-6060-4ca0-a6d1-3ac2bf33b1eb] Running
	I1018 08:56:51.590105  109098 system_pods.go:61] "etcd-addons-281483" [2c845152-cf47-4e61-86e7-fce75c278f9e] Running
	I1018 08:56:51.590112  109098 system_pods.go:61] "kube-apiserver-addons-281483" [d6eae090-7a68-4b53-aefb-5b76bb6eb81e] Running
	I1018 08:56:51.590123  109098 system_pods.go:61] "kube-controller-manager-addons-281483" [4713a454-b895-4e9e-82a0-01eab20470c4] Running
	I1018 08:56:51.590134  109098 system_pods.go:61] "kube-ingress-dns-minikube" [10a40e0a-1ad2-40ed-a7cb-1406b79007c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:56:51.590169  109098 system_pods.go:61] "kube-proxy-m697j" [926c9399-fb0c-48b4-bd10-31524d6804a7] Running
	I1018 08:56:51.590175  109098 system_pods.go:61] "kube-scheduler-addons-281483" [1447e579-310d-403a-ba8b-31ee5f7eb359] Running
	I1018 08:56:51.590182  109098 system_pods.go:61] "metrics-server-85b7d694d7-4bbzn" [a841e975-54e5-458f-aead-2b0ca7cee2c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:56:51.590191  109098 system_pods.go:61] "nvidia-device-plugin-daemonset-mhqn2" [9ffae91b-e17e-4dab-89bd-05ac9e5967b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:56:51.590202  109098 system_pods.go:61] "registry-6b586f9694-z2m56" [3d215353-695f-4b94-af96-f7f4675e103e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:56:51.590214  109098 system_pods.go:61] "registry-creds-764b6fb674-jqt6v" [123e3bf5-e6b7-4903-aa6c-7a4afec09978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:56:51.590224  109098 system_pods.go:61] "registry-proxy-h9ssw" [26352e63-4436-4855-b2e8-f4819ae96865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:56:51.590236  109098 system_pods.go:61] "storage-provisioner" [41059af3-156f-4248-a3c5-b068a6d1c84e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:56:51.590246  109098 system_pods.go:74] duration metric: took 166.359947ms to wait for pod list to return data ...
	I1018 08:56:51.590262  109098 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:56:51.638012  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:51.638039  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:51.638420  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:51.638438  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:51.638450  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:51.665485  109098 default_sa.go:45] found service account: "default"
	I1018 08:56:51.665516  109098 default_sa.go:55] duration metric: took 75.244712ms for default service account to be created ...
	I1018 08:56:51.665528  109098 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:56:51.699714  109098 system_pods.go:86] 15 kube-system pods found
	I1018 08:56:51.699756  109098 system_pods.go:89] "amd-gpu-device-plugin-6ms88" [9d5daeed-6150-4bb9-89a0-3cf2f1273a9c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:56:51.699776  109098 system_pods.go:89] "coredns-66bc5c9577-cjb55" [5fa37c00-217c-4278-9ba2-385e4a772820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:56:51.699788  109098 system_pods.go:89] "coredns-66bc5c9577-mcrjx" [d4a8f109-6060-4ca0-a6d1-3ac2bf33b1eb] Running
	I1018 08:56:51.699796  109098 system_pods.go:89] "etcd-addons-281483" [2c845152-cf47-4e61-86e7-fce75c278f9e] Running
	I1018 08:56:51.699801  109098 system_pods.go:89] "kube-apiserver-addons-281483" [d6eae090-7a68-4b53-aefb-5b76bb6eb81e] Running
	I1018 08:56:51.699806  109098 system_pods.go:89] "kube-controller-manager-addons-281483" [4713a454-b895-4e9e-82a0-01eab20470c4] Running
	I1018 08:56:51.699815  109098 system_pods.go:89] "kube-ingress-dns-minikube" [10a40e0a-1ad2-40ed-a7cb-1406b79007c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:56:51.699819  109098 system_pods.go:89] "kube-proxy-m697j" [926c9399-fb0c-48b4-bd10-31524d6804a7] Running
	I1018 08:56:51.699825  109098 system_pods.go:89] "kube-scheduler-addons-281483" [1447e579-310d-403a-ba8b-31ee5f7eb359] Running
	I1018 08:56:51.699837  109098 system_pods.go:89] "metrics-server-85b7d694d7-4bbzn" [a841e975-54e5-458f-aead-2b0ca7cee2c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:56:51.699845  109098 system_pods.go:89] "nvidia-device-plugin-daemonset-mhqn2" [9ffae91b-e17e-4dab-89bd-05ac9e5967b4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:56:51.699860  109098 system_pods.go:89] "registry-6b586f9694-z2m56" [3d215353-695f-4b94-af96-f7f4675e103e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:56:51.699869  109098 system_pods.go:89] "registry-creds-764b6fb674-jqt6v" [123e3bf5-e6b7-4903-aa6c-7a4afec09978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:56:51.699878  109098 system_pods.go:89] "registry-proxy-h9ssw" [26352e63-4436-4855-b2e8-f4819ae96865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:56:51.699885  109098 system_pods.go:89] "storage-provisioner" [41059af3-156f-4248-a3c5-b068a6d1c84e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:56:51.699899  109098 system_pods.go:126] duration metric: took 34.3633ms to wait for k8s-apps to be running ...
	I1018 08:56:51.699914  109098 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:56:51.699975  109098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:56:51.715776  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:51.717917  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:51.940886  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.386268206s)
	W1018 08:56:51.940987  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:56:51.941024  109098 retry.go:31] will retry after 308.223192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1018 08:56:52.227649  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:52.227931  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:52.249597  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:56:52.721909  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:52.724648  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:52.997248  109098 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.099009771s)
	I1018 08:56:52.997254  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.460849819s)
	I1018 08:56:52.997449  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:52.997472  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:52.997798  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:52.997815  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:52.997824  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:52.997832  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:52.998194  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:52.998210  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:52.998221  109098 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-281483"
	I1018 08:56:52.998189  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:52.998784  109098 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:56:52.999892  109098 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:56:53.001629  109098 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:56:53.002256  109098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:56:53.002922  109098 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:56:53.002940  109098 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:56:53.049861  109098 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:56:53.049896  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:53.215036  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:53.216945  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:53.229244  109098 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:56:53.229266  109098 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:56:53.333729  109098 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:56:53.333754  109098 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
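ssh_runner.go:362 above pushes each gcp-auth manifest to the VM over SSH ("scp memory -->" means the bytes come from an in-memory asset rather than a local file). A rough fragment showing one way to write in-memory bytes to a remote, root-owned path over an existing golang.org/x/crypto/ssh client; pushFile and the sudo tee approach are illustrative assumptions, not minikube's implementation:

    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // pushFile writes data to remotePath on the host behind client, piping the
    // bytes through sudo tee so root-owned paths under /etc are writable.
    func pushFile(client *ssh.Client, remotePath string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }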
	I1018 08:56:53.481355  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:56:53.506869  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:53.697581  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:53.698264  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:54.007252  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:54.198957  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:54.199177  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:54.216531  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.8976449s)
	W1018 08:56:54.216576  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:56:54.216585  109098 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.516579341s)
	I1018 08:56:54.216607  109098 retry.go:31] will retry after 477.542687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:56:54.216618  109098 system_svc.go:56] duration metric: took 2.516699564s WaitForService to wait for kubelet
	I1018 08:56:54.216632  109098 kubeadm.go:586] duration metric: took 12.62814692s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:56:54.216661  109098 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:56:54.224915  109098 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 08:56:54.224956  109098 node_conditions.go:123] node cpu capacity is 2
	I1018 08:56:54.224974  109098 node_conditions.go:105] duration metric: took 8.305885ms to run NodePressure ...
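node_conditions.go above reads the node's ephemeral-storage and CPU capacity (17734596Ki and 2 here) and confirms no pressure conditions before declaring startup complete. A client-go sketch of the same inspection (checkNodePressure is an invented name; the kubeconfig wiring is the standard home path):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // checkNodePressure prints the node's capacity and fails if a pressure
    // condition is set.
    func checkNodePressure(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String())
        for _, c := range node.Status.Conditions {
            if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                c.Status == corev1.ConditionTrue {
                return fmt.Errorf("node %s is under %s", name, c.Type)
            }
        }
        return nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := checkNodePressure(context.Background(), cs, "addons-281483"); err != nil {
            panic(err)
        }
    }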
	I1018 08:56:54.224991  109098 start.go:241] waiting for startup goroutines ...
	I1018 08:56:54.507165  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:54.694286  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:54.700868  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:54.701278  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:55.033276  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:55.166876  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.917220975s)
	I1018 08:56:55.166962  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:55.166988  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:55.167301  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:55.167366  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:55.167388  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:55.167401  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:55.167410  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:55.167632  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:55.167651  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:55.167672  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:55.224433  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:55.226310  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:55.380185  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.898761246s)
	I1018 08:56:55.380258  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:55.380283  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:55.380577  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:55.380636  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:55.380647  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:55.380661  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:56:55.380672  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:56:55.381022  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:56:55.381045  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:56:55.381053  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:56:55.382170  109098 addons.go:479] Verifying addon gcp-auth=true in "addons-281483"
	I1018 08:56:55.384724  109098 out.go:179] * Verifying gcp-auth addon...
	I1018 08:56:55.387083  109098 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:56:55.424114  109098 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:56:55.424134  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:55.510627  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:55.693967  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:55.694111  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:55.893995  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:56.009665  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:56.191710  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:56.192167  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:56.401068  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:56.508267  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:56.665678  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.971350988s)
	W1018 08:56:56.665744  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:56:56.665771  109098 retry.go:31] will retry after 773.772076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:56:56.693819  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:56.696008  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:56.893001  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:57.008170  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:57.192751  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:57.192937  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:57.394575  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:57.440771  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:57.509895  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:57.691680  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:57.695231  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:57.890731  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:58.008357  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:58.189339  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:58.192384  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:58.390915  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:58.506373  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:58.693018  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:58.693182  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:58.737520  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.296698618s)
	W1018 08:56:58.737617  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:56:58.737648  109098 retry.go:31] will retry after 910.482147ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:56:58.891633  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:59.008262  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:59.193629  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:59.198300  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:59.391793  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:56:59.506365  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:56:59.648645  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:56:59.690242  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:56:59.690447  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:56:59.890350  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:00.008034  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:00.191681  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:00.191935  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:00.393250  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:00.508848  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:57:00.632941  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:57:00.632993  109098 retry.go:31] will retry after 1.081363753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 08:57:00.696588  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:00.697440  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:00.894673  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:01.010473  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:01.193570  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:01.194549  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:01.392774  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:01.509730  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:01.693971  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:01.694122  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:01.714951  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:01.890645  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:02.007889  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:02.191964  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:02.193796  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:02.391014  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:02.506244  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:02.691796  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:02.692373  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:02.892324  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:02.960360  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245358937s)
	W1018 08:57:02.960402  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:02.960423  109098 retry.go:31] will retry after 1.145824809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
	I1018 08:57:03.005649  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:03.353719  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:03.355768  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:03.455337  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:03.506788  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:03.689160  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:03.689157  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:03.895618  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:04.006867  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:04.107040  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:04.190789  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:04.192415  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:04.397761  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:04.509378  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:04.690667  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:04.691751  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:04.892353  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:05.009362  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:05.192934  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:05.192974  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:05.238347  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.131263515s)
	W1018 08:57:05.238397  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:05.238424  109098 retry.go:31] will retry after 1.694461921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
	I1018 08:57:05.451804  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:05.509498  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:05.688963  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:05.689130  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:05.892307  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:06.100970  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:06.193542  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:06.195310  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:06.391386  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:06.506726  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:06.689596  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:06.689636  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:06.897878  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:06.933242  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:07.007788  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:07.189803  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:07.191223  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:07.392469  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:08.063699  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:08.063733  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:08.063798  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:08.066428  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:08.158521  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:08.191762  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:08.196519  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:08.402960  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:08.524048  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:08.535958  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.602675626s)
	W1018 08:57:08.536007  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:08.536031  109098 retry.go:31] will retry after 4.601959041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
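
The delays in the will-retry lines above (1.08s, 1.15s, 1.69s, 4.60s, ...) grow roughly exponentially with jitter. A minimal sketch of that backoff pattern, assuming nothing about minikube's actual retry.go:

// retrysketch.go - illustrative jittered exponential backoff; the function name
// and parameters are assumptions, not minikube's implementation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping base*2^i plus random jitter
// between failures, which reproduces the growing "will retry after" delays.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, time.Second, func() error {
		return fmt.Errorf("Process exited with status 1")
	})
}
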
	I1018 08:57:08.689735  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:08.689836  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:08.892442  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:09.007204  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:09.191619  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:09.191983  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:09.394241  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:09.506665  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:09.693099  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:09.693641  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:09.891219  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:10.009262  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:10.191764  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:10.191770  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:10.390954  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:10.507399  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:10.691000  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:10.692081  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:10.890895  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:11.006692  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:11.188907  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:11.191459  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:11.392680  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:11.506750  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:11.690979  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:11.691676  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:11.890756  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:12.006953  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:12.189754  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:12.190131  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:12.390370  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:12.506677  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:12.689195  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:12.689413  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:12.890899  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:13.008550  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:13.138811  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:13.189223  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:13.189351  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:13.393622  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:13.507109  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:13.691629  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:13.692316  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:13.890862  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:14.006619  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:14.168323  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.029469482s)
	W1018 08:57:14.168374  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:14.168397  109098 retry.go:31] will retry after 4.474693091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
	I1018 08:57:14.190270  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:14.190523  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:14.392940  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:14.506467  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:14.693338  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:14.693678  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:14.892851  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:15.009842  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:15.189252  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:15.191780  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:15.391260  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:15.507538  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:15.692326  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:15.692919  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:15.891325  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:16.077295  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:16.190261  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:16.191123  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:16.390733  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:16.505624  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:16.690580  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:16.690661  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:16.891292  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:17.008036  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:17.190955  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:17.191340  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:17.397035  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:17.508065  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:17.692515  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:17.692880  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:17.890847  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:18.005591  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:18.189803  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:18.189999  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:18.391343  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:18.508351  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:18.643308  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:18.691165  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:18.692013  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:18.894027  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:19.008704  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:19.189689  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:19.193670  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:57:19.366530  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:19.366598  109098 retry.go:31] will retry after 7.939476963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
	I1018 08:57:19.390955  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:19.506930  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:19.691371  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:19.693393  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:19.895357  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:20.006887  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:20.194311  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:20.194626  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:20.394406  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:20.507516  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:20.688313  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:20.692395  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:20.890227  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:21.007161  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:21.195492  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:21.196236  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:21.390281  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:21.505498  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:21.690380  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:21.690390  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:21.891208  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:22.006851  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:22.189520  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:22.190800  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:22.391341  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:22.507168  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:22.701687  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:22.701876  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:22.890592  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:23.007248  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:23.191782  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:23.192631  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:23.392002  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:23.507271  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:23.691486  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:23.691554  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:23.890638  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:24.007551  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:24.192542  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:24.192599  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:24.394133  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:24.509748  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:24.694865  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:24.696643  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:24.891374  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:25.009981  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:25.191426  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:25.194311  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:25.392187  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:25.507341  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:25.691870  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:25.695546  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:25.892820  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:26.307850  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:26.307966  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:26.308075  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:26.392452  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:26.508624  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:26.691537  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:26.691644  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:26.892797  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:27.009841  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:27.189516  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:27.190548  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:27.306655  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:27.392668  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:27.507902  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:27.689774  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:27.693214  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:27.892517  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:28.008207  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:28.190416  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:28.192331  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:28.355787  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.049083421s)
	W1018 08:57:28.355834  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the first apply failure above)
	I1018 08:57:28.355853  109098 retry.go:31] will retry after 18.574278867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure logged immediately above)
	I1018 08:57:28.395485  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:28.506418  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:28.688654  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:28.690133  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:28.894098  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:29.008121  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:29.190558  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:29.192756  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:29.391966  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:29.508236  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:29.690758  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:29.692701  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:29.892520  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:30.007409  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:30.190373  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:30.191876  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:30.538511  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:30.540050  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:30.691849  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:30.691933  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:30.893416  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:31.007610  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:31.189730  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:31.191114  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:31.390934  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:31.506597  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:31.691507  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:31.692096  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:31.893762  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:32.007409  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:32.189118  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:32.189337  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:32.390514  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:32.505842  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:32.690864  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:32.690921  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:32.890577  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:33.007100  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:33.189573  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:33.189987  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:33.390859  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:33.509546  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:33.693493  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:33.696806  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:33.893869  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:34.008377  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:34.192181  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:34.192843  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:34.391781  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:34.508735  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:34.692596  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:34.694656  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:34.893571  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:35.008476  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:35.189157  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:35.189207  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:35.394851  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:35.507292  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:35.691162  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:57:35.691345  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:35.892532  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:36.015064  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:36.190583  109098 kapi.go:107] duration metric: took 45.005352637s to wait for kubernetes.io/minikube-addons=registry ...
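
The registry wait above is the first of the kapi.go poll loops to finish: list pods matching a label selector every few hundred milliseconds until one reports Running, then record the duration. A hypothetical client-go sketch of that pattern follows; the function name, namespace, and kubeconfig path are assumptions, not minikube's implementation.

// waitpods.go - hypothetical sketch of the "waiting for pod ... Pending" loop.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPod polls pods matching selector in ns until one is Running or the
// timeout elapses, logging a "waiting" line on each miss as the log above does.
func waitForPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // the log shows ~200-500ms between polls
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	// "kube-system" is an assumed namespace for the registry addon pods.
	if err := waitForPod(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
}
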
	I1018 08:57:36.192565  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:36.392950  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:36.507645  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:36.694526  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:36.891953  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:37.394393  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:37.394633  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:37.394737  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:37.509779  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:37.689937  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:37.890239  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:38.007317  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:38.188611  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:38.390874  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:38.506366  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:38.695876  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:38.890946  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:39.007222  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:39.190583  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:39.392435  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:39.507042  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:39.691943  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:39.894962  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:40.009959  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:40.194990  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:40.392541  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:40.508575  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:40.692208  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:40.894795  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:41.008974  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:41.194091  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:41.392632  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:41.511092  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:41.690036  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:41.891454  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:42.008299  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:42.191177  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:42.392295  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:42.507305  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:42.982025  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:42.982221  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:43.008295  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:43.194758  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:43.391766  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:43.506864  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:43.689282  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:43.891396  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:44.006634  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:44.190039  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:44.391709  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:44.505993  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:44.691036  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:44.891859  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:45.009892  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:45.191213  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:45.390363  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:45.509410  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:45.690414  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:45.893062  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:46.008111  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:46.189265  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:46.391476  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:46.506264  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:46.692679  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:46.891181  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:46.931300  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:57:47.009835  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:47.198851  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:47.393513  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:47.507300  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:47.694677  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:47.891079  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:47.948594  109098 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.017248317s)
	W1018 08:57:47.948656  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:57:47.948707  109098 retry.go:31] will retry after 30.637624675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
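Note that the retry re-applies the unchanged manifest, so it fails identically half a minute later (see the 08:58:19 entry below): the validator is rejecting the file contents, not a transient cluster state. The randomized delay ("will retry after 30.637624675s") suggests a jittered backoff loop; here is a minimal Go sketch of that pattern, with illustrative names rather than minikube's actual retry.go API:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryApply re-runs apply with a randomized delay until it succeeds
    // or the overall deadline passes, mirroring the fractional
    // "will retry after ..." delays in the log above.
    func retryApply(apply func() error, deadline time.Duration) error {
        start := time.Now()
        for {
            err := apply()
            if err == nil {
                return nil
            }
            if time.Since(start) >= deadline {
                return fmt.Errorf("timed out retrying: %w", err)
            }
            // Jittered wait of up to 40s (the bound is an assumption).
            wait := time.Duration(rand.Int63n(int64(40 * time.Second)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
    }

    func main() {
        attempts := 0
        _ = retryApply(func() error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("apply failed (attempt %d)", attempts)
            }
            return nil
        }, 2*time.Minute)
    }

Because the validation error is deterministic, the loop's only effect here is to delay the final failure.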
	I1018 08:57:48.006976  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:48.189863  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:48.391604  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:48.506890  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:48.694492  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:48.891653  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:49.008054  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:49.190985  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:49.394255  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:49.505552  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:49.689192  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:49.891007  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:50.006911  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:50.189886  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:50.391668  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:50.506605  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:50.690860  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:50.891898  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:51.008680  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:51.191409  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:51.391447  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:51.506614  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:51.691355  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:51.892121  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:52.007027  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:52.190595  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:52.392892  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:52.506132  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:52.693943  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:52.892539  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:53.013097  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:53.192470  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:53.393116  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:53.508917  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:53.691598  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:53.890875  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:54.006587  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:54.189092  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:54.396743  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:54.515993  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:54.694643  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:54.891531  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:55.008334  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:55.189706  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:55.397208  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:55.509802  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:55.691943  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:55.892588  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:56.012771  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:56.191984  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:56.398080  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:56.508442  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:56.689488  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:56.895073  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:57.007314  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:57.188312  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:57.393009  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:57.507629  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:57.692920  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:57.890483  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:58.006078  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:58.196363  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:58.392057  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:58.508392  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:58.690872  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:58.895892  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:59.007187  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:59.190622  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:59.393196  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:57:59.508948  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:57:59.692380  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:57:59.891191  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:00.009839  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:00.192487  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:00.391899  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:00.524303  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:00.825396  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:00.892125  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:01.011677  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:01.192525  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:01.392856  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:01.509425  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:01.688699  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:01.890245  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:02.007220  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:02.190025  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:02.391586  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:02.507342  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:02.692568  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:02.890705  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:03.009855  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:03.195774  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:03.571025  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:03.571122  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:03.691833  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:03.893490  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:04.008735  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:04.189294  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:04.390899  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:04.509661  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:04.694509  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:04.971548  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:05.009867  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:05.189550  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:05.395385  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:05.511216  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:05.691591  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:05.899321  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:06.244092  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:06.245669  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:06.394191  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:06.507266  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:06.688970  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:06.892989  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:07.006773  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:07.191771  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:07.392461  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:07.509260  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:07.859889  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:07.893242  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:08.009011  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:08.193834  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:08.396082  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:08.509595  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:08.690937  109098 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:58:08.914655  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:09.009241  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:09.192931  109098 kapi.go:107] duration metric: took 1m18.007701763s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:58:09.396991  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:09.507271  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:09.903011  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:10.019644  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:10.393548  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:10.511669  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:10.893684  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:11.007608  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:11.394360  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:11.507657  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:11.892609  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:12.009995  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:12.391984  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:12.507032  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:12.895262  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:13.008609  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:13.391620  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:13.507971  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:13.890579  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:14.106353  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:14.391384  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:14.505879  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:58:14.892547  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:15.007242  109098 kapi.go:107] duration metric: took 1m22.004977308s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 08:58:15.392848  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:15.890287  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:16.391977  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:16.891487  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:17.392121  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:17.891330  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:18.390492  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:18.586868  109098 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:58:18.892616  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:58:19.327739  109098 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:58:19.327814  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:58:19.327824  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:58:19.328161  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:58:19.328180  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	I1018 08:58:19.328187  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:58:19.328202  109098 main.go:141] libmachine: Making call to close driver server
	I1018 08:58:19.328213  109098 main.go:141] libmachine: (addons-281483) Calling .Close
	I1018 08:58:19.328473  109098 main.go:141] libmachine: Successfully made call to close driver server
	I1018 08:58:19.328495  109098 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 08:58:19.328478  109098 main.go:141] libmachine: (addons-281483) DBG | Closing plugin on server side
	W1018 08:58:19.328596  109098 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
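Both attempts die on the same stderr line: at least one YAML document inside ig-crd.yaml reaches kubectl's validator without apiVersion or kind set, and an empty or malformed document is enough to trigger that. A self-contained Go sketch of the corresponding check; the file path and the nil-document handling are assumptions for illustration, not minikube code:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Scan every YAML document in a manifest for the two fields the
    // validator reported missing ("apiVersion not set, kind not set").
    func main() {
        f, err := os.Open("ig-crd.yaml") // illustrative path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    return // end of stream
                }
                panic(err)
            }
            if doc == nil { // an empty document decodes to nil here
                fmt.Printf("document %d: empty\n", i)
                continue
            }
            if doc["apiVersion"] == nil || doc["kind"] == nil {
                fmt.Printf("document %d: apiVersion/kind not set\n", i)
            }
        }
    }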
	I1018 08:58:19.391081  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:19.911622  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:20.392568  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:20.891487  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:21.391703  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:21.890664  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:22.391855  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:22.891178  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:23.390899  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:23.890254  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:24.391857  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:24.891289  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:25.391197  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:25.891235  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:26.391271  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:26.891273  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:27.391047  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:27.890787  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:28.391976  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:28.890511  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:29.391967  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:29.891310  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:30.391604  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:30.891223  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:31.390893  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:31.891080  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:32.391528  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:32.890432  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:33.390345  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:33.890905  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:34.391574  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:34.891558  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:35.391689  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:35.891094  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:36.390793  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:36.890539  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:37.391050  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:37.891373  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:38.390699  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:38.891292  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:39.392295  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:39.891731  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:40.393217  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:40.891697  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:41.390508  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:41.892165  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:42.391008  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:42.890554  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:43.390845  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:43.890708  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:44.392759  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:44.890949  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:45.390342  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:45.892279  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:46.392521  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:46.891410  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:47.391492  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:47.891846  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:48.392016  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:48.890667  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:49.391296  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:49.891187  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:50.391385  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:50.892027  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:51.391682  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:51.890310  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:52.391798  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:52.890623  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:53.391235  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:53.891643  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:54.391316  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:54.892498  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:55.391769  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:55.892003  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:56.391893  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:56.890410  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:57.391290  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:57.891074  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:58.391785  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:58.890985  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:59.390873  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:58:59.893002  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:00.390890  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:00.890917  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:01.391108  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:01.891381  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:02.391512  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:02.891921  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:03.391790  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:03.890615  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:04.391882  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:04.890279  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:05.391485  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:05.891515  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:06.391856  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:06.890123  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:07.392259  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:07.891203  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:08.392368  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:08.892547  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:09.392069  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:09.891582  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:10.391734  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:10.890490  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:11.394929  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:11.892630  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:12.391520  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:12.892421  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:13.394723  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:13.892634  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:14.392692  109098 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:14.892287  109098 kapi.go:107] duration metric: took 2m19.505201104s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:59:14.894223  109098 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-281483 cluster.
	I1018 08:59:14.895671  109098 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:59:14.897180  109098 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 08:59:14.898670  109098 out.go:179] * Enabled addons: ingress-dns, nvidia-device-plugin, registry-creds, storage-provisioner, amd-gpu-device-plugin, metrics-server, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 08:59:14.899983  109098 addons.go:514] duration metric: took 2m33.31149311s for enable addons: enabled=[ingress-dns nvidia-device-plugin registry-creds storage-provisioner amd-gpu-device-plugin metrics-server cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
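The opt-out mentioned a few lines up is a plain pod label keyed `gcp-auth-skip-secret`; pods carrying it are left alone by the gcp-auth mutating webhook. A minimal client-go sketch follows; the label key is quoted from the log, while the value "true", the pod name, and the image are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // buildOptOutPod returns a pod that the gcp-auth webhook should skip,
    // using the label key quoted in the log output above.
    func buildOptOutPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds", // illustrative name
                Namespace: "default",
                Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "busybox", // illustrative image
                }},
            },
        }
    }

    func main() {
        pod := buildOptOutPod()
        fmt.Println(pod.Labels) // map[gcp-auth-skip-secret:true]
    }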
	I1018 08:59:14.900034  109098 start.go:246] waiting for cluster config update ...
	I1018 08:59:14.900052  109098 start.go:255] writing updated cluster config ...
	I1018 08:59:14.900399  109098 ssh_runner.go:195] Run: rm -f paused
	I1018 08:59:14.908079  109098 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:59:14.913528  109098 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mcrjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.920227  109098 pod_ready.go:94] pod "coredns-66bc5c9577-mcrjx" is "Ready"
	I1018 08:59:14.920260  109098 pod_ready.go:86] duration metric: took 6.705563ms for pod "coredns-66bc5c9577-mcrjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.923213  109098 pod_ready.go:83] waiting for pod "etcd-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.930556  109098 pod_ready.go:94] pod "etcd-addons-281483" is "Ready"
	I1018 08:59:14.930587  109098 pod_ready.go:86] duration metric: took 7.350834ms for pod "etcd-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.934908  109098 pod_ready.go:83] waiting for pod "kube-apiserver-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.941786  109098 pod_ready.go:94] pod "kube-apiserver-addons-281483" is "Ready"
	I1018 08:59:14.941812  109098 pod_ready.go:86] duration metric: took 6.881316ms for pod "kube-apiserver-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:14.992866  109098 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:15.313649  109098 pod_ready.go:94] pod "kube-controller-manager-addons-281483" is "Ready"
	I1018 08:59:15.313677  109098 pod_ready.go:86] duration metric: took 320.772183ms for pod "kube-controller-manager-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:15.513724  109098 pod_ready.go:83] waiting for pod "kube-proxy-m697j" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:15.913374  109098 pod_ready.go:94] pod "kube-proxy-m697j" is "Ready"
	I1018 08:59:15.913405  109098 pod_ready.go:86] duration metric: took 399.657528ms for pod "kube-proxy-m697j" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:16.114574  109098 pod_ready.go:83] waiting for pod "kube-scheduler-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:16.513716  109098 pod_ready.go:94] pod "kube-scheduler-addons-281483" is "Ready"
	I1018 08:59:16.513760  109098 pod_ready.go:86] duration metric: took 399.15614ms for pod "kube-scheduler-addons-281483" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:59:16.513777  109098 pod_ready.go:40] duration metric: took 1.605661714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:59:16.557761  109098 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 08:59:16.559648  109098 out.go:179] * Done! kubectl is now configured to use "addons-281483" cluster and "default" namespace by default
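For context on the wall of "waiting for pod ..., current state: Pending" lines above: each one is a single iteration of a label-selector poll (kapi.go:96), and the pod_ready waits at the end follow the same shape. A condensed client-go sketch of that pattern; it is not minikube's actual implementation, and the 500ms interval and 6-minute timeout are assumptions:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls pods matching selector (e.g.
    // "app.kubernetes.io/name=ingress-nginx") until all are Running.
    func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling; logged as "Pending: [<nil>]"
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

Returning false with a nil error keeps the poll alive, which is why a not-yet-ready pod surfaces as the repeated Pending log line rather than as an error.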
	
	
	==> CRI-O <==
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.536150829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6520d2bf-df12-48dc-9b77-b9a86248deae name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.536561234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d49b9eaccea6c19da904071889509e98e96343d8c4a705d3ecbc7b8e1b7d311,PodSandboxId:d76c364721c05ad395d1126bb9b12e83b95e9008cdd39be1c612840aa0195e99,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760778007312143826,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0b89899-4a96-4e7d-83a7-2bf1d0fe72c7,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba8ed45683a97aaf4ff7606b6fb34f1653773ee41595ba1b90f94cf42e167ae1,PodSandboxId:cc9edfa835a30dbb86a621f3b5b6a57e94165cb4a1d79e38188d355d50f7c71e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760777960957204441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3848d01-e41f-467d-aa3b-5eb78fb5c1a2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a2da836a62d08f9858acc8a133b4b966a02fc769fa9b4c069a3818826207b8,PodSandboxId:476b02ff9e308cf8cd7ea75da046c87206003f1e4ea21efd5d503f44be83cdde,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760777888119478675,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-wpj8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fddcd0cf-3c56-48c3-8a55-c596b40cfa13,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cb039d8b7304077b99c2a4430e62d8fba69e138aa086ca3d2827d4ca971d960f,PodSandboxId:d062339a1a3f94800599613d797890bc30cc63704967abff0f25343d102164e7,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760777887959486072,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6mml,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16dfd05b-8fd6-41a4-8d5b-7b12f576a519,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba65b18fed5ff10ca934a97f613d31a256942663a65673c4be1ffaa28d9fd4,PodSandboxId:e592cb111e240e70cb10acdffac6380481f8386f161692b5014ed28368b79e69,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760777874507051117,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gz4z5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14b441be-6791-44cc-b702-6fb26cfbc6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c45c4b6eef150afa7df3a277aeda1c2e1cecf6a004dc8d89c28bd5f9f9aa691,PodSandboxId:fcb7e9f7f3d6658190a4b9f797c0a2f76d8fa769d448aa7cb5d7b9ee08bd2648,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760777868534561943,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c8dxt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 6dd63393-ff91-4b28-bcdb-e40921dc9b49,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bb463c6698e66c519f8c56a2269040354be45a3b7002571df6d6a12644c0fe,PodSandboxId:664d5966a97a42944eb42eed9aa96c78c08288ef2322d74d808fa9e787cb44c9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760777851094014924,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a40e0a-1ad2-40ed-a7cb-1406b79007c5,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1eaecf5dcf9117bb35dad5527037d7bd12bf6500c23cf4824b051058a3939a,PodSandboxId:df7213422a6800ef26856cebbef5fa752d9fe2c1ddb06e664ee582127d18350b,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760777811345719736,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6ms88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5daeed-6150-4bb9-89a0-3cf2f1273a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c580acbcbc35fe0fda47c3bedbbb9df73360212a8468020f0399f9affaa1e2,PodSandboxId:cd0403921809a2a0208a0398332a8adec450fb7b66c858091d247d300bbde39f,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760777811142490887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41059af3-156f-4248-a3c5-b068a6d1c84e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5def1f776ed15615a37611daf1785218b686a1067cac2248530d51b955ca8e,PodSandboxId:e78be7b6a68549870c6aeba4a1fd13933d6f233627c51dbdeeb0782e095ee4cc,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760777803846770716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mcrjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4a8f109-6060-4ca0-a6d1-3ac2bf33b1eb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0b939e2616731fc658682bb44e2e01b00eef8060c758bb464eb4d95c9f586b,PodSandboxId:0dc76de9f1903ada577455cfc4fbc036b25ca8f49dd0a75c145588256f321398,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760777803169313739,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m697j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926c9399-fb0c-48b4-bd10-31524d6804a7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b9e343a3e5f57af547911e73472f914ef13df6462a8516d79278247a770199,PodSandboxId:01ce1df2dedba4edd6ad25bbb7e2ecbf970865d14020f5a02744b1f50a42fb55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760777791015120618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec6cf46a5a5fcf6656604bb5e0f1505,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a104833a65484c0ba7480db7fb1d65b86b33cafd58c1b9f2c3be34e2ede72c0,PodSandboxId:4eb9247d7fd9c701378e171a813a3a183da985096b0a7fcb3e79b91b757d3838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760777790970905476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416642e50f632d3c7bcc431bb5503b6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24384bb628890fbcf19a7458bd94242a8923eee1a650a7ded10531f715e6e4a,PodSandboxId:b1d81d65d6c619fe9b168992a5c9188f646d988afd3178c67684c50999e6848f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760777790969175196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2e5f6a73418
c605b442150589320ad,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddaa51678623400e17145b0fb07f0bdf6846d9635b19194ae97540b66b1112a7,PodSandboxId:20da6acc251504ee9d14226b2905dbab2188aa2e4c7972db79c6bd991fd3604d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760777790946314065,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73a8b72c150ca2a973c0ee48ca363a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6520d2bf-df12-48dc-9b77-b9a86248deae name=/runtime.v1.RuntimeService/ListContainers
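	[editor's note] The response ending above is one complete inventory from /runtime.v1.RuntimeService/ListContainers: each container's id, sandbox id, image, state, CreatedAt (Unix nanoseconds), labels, and annotations, serialized onto a single log line. For anyone who wants to issue the same RPC by hand, the following is a minimal sketch; the socket path /var/run/crio/crio.sock, the module choices (google.golang.org/grpc, k8s.io/cri-api), and the standalone-program framing are illustrative assumptions, not details taken from this report.

	// listcontainers.go -- a minimal sketch, not part of the test suite.
	// Assumed environment: run inside the minikube node, CRI-O listening on
	// its default socket, google.golang.org/grpc and k8s.io/cri-api on the
	// module path.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// The unix:// scheme lets grpc-go dial the local CRI-O socket directly.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty ListContainersRequest is what produces the "No filters were
		// applied, returning full container list" debug line seen in this log.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// CreatedAt is Unix nanoseconds, matching the CreatedAt fields above.
			fmt.Printf("%-25s %-20s %s\n", c.Metadata.Name, c.State,
				time.Unix(0, c.CreatedAt).UTC().Format(time.RFC3339))
		}
	}

	Inside the node, crictl ps -a pointed at the same runtime endpoint gives an equivalent, pre-formatted view of this response; the sketch above is just the raw RPC.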
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.581530862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd893923-f06d-40fe-9413-08a30282edf1 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.581763098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd893923-f06d-40fe-9413-08a30282edf1 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.583777252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2687adaf-3fcb-41bc-9d33-2dac11415eeb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.585130942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760778150585104862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2687adaf-3fcb-41bc-9d33-2dac11415eeb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.585810345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7206631a-0516-4e08-b55c-955fc1adfd3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.585886734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7206631a-0516-4e08-b55c-955fc1adfd3b name=/runtime.v1.RuntimeService/ListContainers
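	[editor's note] The Version → ImageFsInfo → ListContainers triplet above repeats roughly every 40 ms for the rest of this capture with fresh request ids but byte-identical payloads (plausibly the kubelet's periodic sync or the post-mortem log collection itself polling the runtime), so the repeated container dumps below are condensed.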
	[log excerpt condensed: the ListContainersResponse for id=7206631a-0516-4e08-b55c-955fc1adfd3b is byte-for-byte identical to the id=6520d2bf response above and is omitted]
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.615846072Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.616117621Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
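	[editor's note] The two docker_client lines above are crio's image resolver working in parallel with the CRI polling: the manifest it just fetched for kicbase/echo-server came back with Content-Type application/vnd.docker.distribution.manifest.list.v2+json, i.e. a multi-arch manifest list, so the client follows up with the digest-addressed GET to retrieve the per-platform manifest.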
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.624600286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acc762c3-884d-4deb-8cf9-dd08b7f86a8b name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.624902346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acc762c3-884d-4deb-8cf9-dd08b7f86a8b name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.626183552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a606df6-6b32-4c3e-9c29-206d5433379a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.627757195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760778150627687280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a606df6-6b32-4c3e-9c29-206d5433379a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.628608324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbe6dfcf-ca55-488f-a19f-660523a74f62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.628748147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbe6dfcf-ca55-488f-a19f-660523a74f62 name=/runtime.v1.RuntimeService/ListContainers
	[log excerpt condensed: the ListContainersResponse for id=dbe6dfcf-ca55-488f-a19f-660523a74f62 is identical to the response above and is omitted]
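	[editor's note] A worked check on the nanosecond fields: the ImageFsInfo response above reports Timestamp:1760778150627687280; dividing by 10^9 gives 1760778150 s, i.e. 2025-10-18 09:02:30 UTC, which matches the journal stamp on that line. The same conversion applied to the nginx container's CreatedAt:1760778007312143826 yields 09:00:07 UTC, so nginx had been running for about two and a half minutes when these listings were taken.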
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.667598477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb807ddf-f152-45cf-9451-83d2fe0e6da5 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.667688455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb807ddf-f152-45cf-9451-83d2fe0e6da5 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.669733292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f911cd3e-725a-4586-84a4-3f954ade5c2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.671077973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760778150671047905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f911cd3e-725a-4586-84a4-3f954ade5c2b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.672063215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c1cdb37-de91-4685-a4f7-4a5faed9398e name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:02:30 addons-281483 crio[812]: time="2025-10-18 09:02:30.672271229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c1cdb37-de91-4685-a4f7-4a5faed9398e name=/runtime.v1.RuntimeService/ListContainers
	[log excerpt condensed: a third identical ListContainersResponse (id=4c1cdb37-de91-4685-a4f7-4a5faed9398e), truncated in the capture, is omitted]
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0b939e2616731fc658682bb44e2e01b00eef8060c758bb464eb4d95c9f586b,PodSandboxId:0dc76de9f1903ada577455cfc4fbc036b25ca8f49dd0a75c145588256f321398,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760777803169313739,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m697j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926c9399-fb0c-48b4-bd10-31524d6804a7,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18b9e343a3e5f57af547911e73472f914ef13df6462a8516d79278247a770199,PodSandboxId:01ce1df2dedba4edd6ad25bbb7e2ecbf970865d14020f5a02744b1f50a42fb55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760777791015120618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec6cf46a5a5fcf6656604bb5e0f1505,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPo
rt\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a104833a65484c0ba7480db7fb1d65b86b33cafd58c1b9f2c3be34e2ede72c0,PodSandboxId:4eb9247d7fd9c701378e171a813a3a183da985096b0a7fcb3e79b91b757d3838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760777790970905476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416642e50f632d3c7bcc431bb5503b6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24384bb628890fbcf19a7458bd94242a8923eee1a650a7ded10531f715e6e4a,PodSandboxId:b1d81d65d6c619fe9b168992a5c9188f646d988afd3178c67684c50999e6848f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760777790969175196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2e5f6a73418
c605b442150589320ad,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddaa51678623400e17145b0fb07f0bdf6846d9635b19194ae97540b66b1112a7,PodSandboxId:20da6acc251504ee9d14226b2905dbab2188aa2e4c7972db79c6bd991fd3604d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760777790946314065,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-addons-281483,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e73a8b72c150ca2a973c0ee48ca363a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c1cdb37-de91-4685-a4f7-4a5faed9398e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d49b9eaccea6       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   d76c364721c05       nginx
	ba8ed45683a97       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   cc9edfa835a30       busybox
	93a2da836a62d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             4 minutes ago       Running             controller                0                   476b02ff9e308       ingress-nginx-controller-675c5ddd98-wpj8k
	cb039d8b73040       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     2                   d062339a1a3f9       ingress-nginx-admission-patch-h6mml
	26ba65b18fed5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   e592cb111e240       ingress-nginx-admission-create-gz4z5
	8c45c4b6eef15       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   fcb7e9f7f3d66       gadget-c8dxt
	82bb463c6698e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   664d5966a97a4       kube-ingress-dns-minikube
	2b1eaecf5dcf9       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   df7213422a680       amd-gpu-device-plugin-6ms88
	b9c580acbcbc3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   cd0403921809a       storage-provisioner
	1d5def1f776ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   e78be7b6a6854       coredns-66bc5c9577-mcrjx
	4d0b939e26167       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   0dc76de9f1903       kube-proxy-m697j
	18b9e343a3e5f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   01ce1df2dedba       kube-scheduler-addons-281483
	2a104833a6548       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   4eb9247d7fd9c       kube-apiserver-addons-281483
	d24384bb62889       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   b1d81d65d6c61       etcd-addons-281483
	ddaa516786234       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   20da6acc25150       kube-controller-manager-addons-281483
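
Note: the two Exited rows (create, patch) appear to be the ingress-nginx admission-webhook certgen jobs, which run to completion by design; patch at ATTEMPT 2 means it restarted twice before succeeding, not that it is still failing. Everything the Ingress test depends on (the controller, the nginx test pod, kube-ingress-dns-minikube, coredns) is Running at this point.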
	
	
	==> coredns [1d5def1f776ed15615a37611daf1785218b686a1067cac2248530d51b955ca8e] <==
	[INFO] 10.244.0.8:56289 - 65233 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000154644s
	[INFO] 10.244.0.8:56289 - 38454 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000116602s
	[INFO] 10.244.0.8:56289 - 9936 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000103353s
	[INFO] 10.244.0.8:56289 - 61129 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076649s
	[INFO] 10.244.0.8:56289 - 16584 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00030703s
	[INFO] 10.244.0.8:56289 - 33775 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000241566s
	[INFO] 10.244.0.8:56289 - 18049 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000623841s
	[INFO] 10.244.0.8:37733 - 10950 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142598s
	[INFO] 10.244.0.8:37733 - 11234 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136538s
	[INFO] 10.244.0.8:54489 - 2935 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113421s
	[INFO] 10.244.0.8:54489 - 2700 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078639s
	[INFO] 10.244.0.8:50034 - 54484 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095287s
	[INFO] 10.244.0.8:50034 - 54220 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141856s
	[INFO] 10.244.0.8:59700 - 26629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150585s
	[INFO] 10.244.0.8:59700 - 27051 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000943473s
	[INFO] 10.244.0.23:53345 - 19591 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0006997s
	[INFO] 10.244.0.23:52055 - 53831 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000146955s
	[INFO] 10.244.0.23:53561 - 55674 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109777s
	[INFO] 10.244.0.23:52661 - 35983 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000279745s
	[INFO] 10.244.0.23:39053 - 53446 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010896s
	[INFO] 10.244.0.23:39975 - 24503 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014003s
	[INFO] 10.244.0.23:56396 - 14229 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.003145987s
	[INFO] 10.244.0.23:47772 - 54613 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003295145s
	[INFO] 10.244.0.27:58784 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000301715s
	[INFO] 10.244.0.27:49551 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000206087s
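
The NXDOMAIN-then-NOERROR runs above are ordinary search-path expansion, not lookup failures: pods default to options ndots:5, so any name with fewer than five dots is tried against each search suffix (here kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before being queried verbatim. A minimal sketch of that ordering in Go, assuming the standard resolver rule and the search path visible in the log (queryOrder is a hypothetical helper, not CoreDNS code):

package main

import (
	"fmt"
	"strings"
)

// queryOrder reproduces the resolver's search-list ordering for a pod
// with "options ndots:5": names with fewer than ndots dots are tried
// against each search suffix first, then verbatim. Illustrative only.
func queryOrder(name string, ndots int, search []string) []string {
	var tries []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
		tries = append(tries, name)
	} else {
		tries = append(tries, name)
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
	}
	return tries
}

func main() {
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	// Four dots < ndots(5), so the three suffixed names are tried (and
	// NXDOMAIN'd, as in the log) before the bare name returns NOERROR.
	for _, q := range queryOrder("registry.kube-system.svc.cluster.local", 5, search) {
		fmt.Println(q)
	}
}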
	
	
	==> describe nodes <==
	Name:               addons-281483
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-281483
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=addons-281483
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_56_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-281483
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:56:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-281483
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:02:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:00:11 +0000   Sat, 18 Oct 2025 08:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:00:11 +0000   Sat, 18 Oct 2025 08:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:00:11 +0000   Sat, 18 Oct 2025 08:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:00:11 +0000   Sat, 18 Oct 2025 08:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    addons-281483
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d85d66c12e64b3eaef38ab5ceef778c
	  System UUID:                9d85d66c-12e6-4b3e-aef3-8ab5ceef778c
	  Boot ID:                    b7682f1f-6e1a-4069-93b0-e782d9abc35b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     hello-world-app-5d498dc89-sqp9h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gadget                      gadget-c8dxt                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-wpj8k    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m40s
	  kube-system                 amd-gpu-device-plugin-6ms88                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 coredns-66bc5c9577-mcrjx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m48s
	  kube-system                 etcd-addons-281483                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-281483                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-addons-281483        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-m697j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-addons-281483                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m46s  kube-proxy       
	  Normal  Starting                 5m53s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m53s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s  kubelet          Node addons-281483 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s  kubelet          Node addons-281483 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s  kubelet          Node addons-281483 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m53s  kubelet          Node addons-281483 status is now: NodeReady
	  Normal  RegisteredNode           5m49s  node-controller  Node addons-281483 event: Registered Node addons-281483 in Controller
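
The Allocated resources block is just the column sums of the pod table divided by the node's allocatable capacity. As a quick cross-check of the cpu and memory rows (a throwaway Go sketch, with the request values copied from the table above):

package main

import "fmt"

func main() {
	// CPU requests in millicores: ingress-nginx-controller, coredns,
	// etcd, kube-apiserver, kube-controller-manager, kube-scheduler.
	cpu := []int64{100, 100, 100, 250, 200, 100}
	var totalCPU int64
	for _, r := range cpu {
		totalCPU += r
	}
	allocCPU := int64(2000) // cpu: 2 => 2000m
	fmt.Printf("cpu %dm (%d%%)\n", totalCPU, totalCPU*100/allocCPU) // cpu 850m (42%)

	// Memory requests in Mi: ingress-nginx-controller, coredns, etcd.
	memMi := int64(90 + 70 + 100)
	allocKi := int64(4008596) // memory: 4008596Ki
	fmt.Printf("memory %dMi (%d%%)\n", memMi, memMi*1024*100/allocKi) // memory 260Mi (6%)
}

Note also hello-world-app-5d498dc89-sqp9h at 1s old: this snapshot was taken just as that workload was being created, evidently by the ingress-dns example manifest applied at the end of the test.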
	
	
	==> dmesg <==
	[Oct18 08:57] kauditd_printk_skb: 347 callbacks suppressed
	[  +6.938035] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.427789] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.427889] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.593178] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.424759] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.842422] kauditd_printk_skb: 50 callbacks suppressed
	[  +3.000283] kauditd_printk_skb: 131 callbacks suppressed
	[Oct18 08:58] kauditd_printk_skb: 86 callbacks suppressed
	[  +1.855288] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.969486] kauditd_printk_skb: 61 callbacks suppressed
	[Oct18 08:59] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.296751] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.468714] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.642030] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.990986] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.917453] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 114 callbacks suppressed
	[  +1.370559] kauditd_printk_skb: 191 callbacks suppressed
	[Oct18 09:00] kauditd_printk_skb: 157 callbacks suppressed
	[  +4.654085] kauditd_printk_skb: 55 callbacks suppressed
	[  +0.001674] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.843196] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.111085] kauditd_printk_skb: 127 callbacks suppressed
	[Oct18 09:02] kauditd_printk_skb: 10 callbacks suppressed
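
Note: "kauditd_printk_skb: N callbacks suppressed" only means the kernel rate-limited audit messages bound for the console; there are no oops, OOM, or hardware lines in this window.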
	
	
	==> etcd [d24384bb628890fbcf19a7458bd94242a8923eee1a650a7ded10531f715e6e4a] <==
	{"level":"info","ts":"2025-10-18T08:58:00.820240Z","caller":"traceutil/trace.go:172","msg":"trace[1064307755] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"135.342483ms","start":"2025-10-18T08:58:00.684880Z","end":"2025-10-18T08:58:00.820223Z","steps":["trace[1064307755] 'range keys from in-memory index tree'  (duration: 134.443956ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:03.559522Z","caller":"traceutil/trace.go:172","msg":"trace[2088362477] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1139; }","duration":"173.821178ms","start":"2025-10-18T08:58:03.385655Z","end":"2025-10-18T08:58:03.559476Z","steps":["trace[2088362477] 'read index received'  (duration: 173.81591ms)","trace[2088362477] 'applied index is now lower than readState.Index'  (duration: 4.167µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T08:58:03.562491Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.832271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:58:03.562567Z","caller":"traceutil/trace.go:172","msg":"trace[1976535914] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1108; }","duration":"176.920284ms","start":"2025-10-18T08:58:03.385636Z","end":"2025-10-18T08:58:03.562557Z","steps":["trace[1976535914] 'agreement among raft nodes before linearized reading'  (duration: 176.796693ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:03.561941Z","caller":"traceutil/trace.go:172","msg":"trace[920049716] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"252.835982ms","start":"2025-10-18T08:58:03.309090Z","end":"2025-10-18T08:58:03.561926Z","steps":["trace[920049716] 'process raft request'  (duration: 252.713165ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:58:03.562900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.689042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-18T08:58:03.562922Z","caller":"traceutil/trace.go:172","msg":"trace[1624701148] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1108; }","duration":"163.715982ms","start":"2025-10-18T08:58:03.399198Z","end":"2025-10-18T08:58:03.562914Z","steps":["trace[1624701148] 'agreement among raft nodes before linearized reading'  (duration: 163.620791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:58:04.963960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.33146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-fqkkr\" limit:1 ","response":"range_response_count:1 size:3841"}
	{"level":"info","ts":"2025-10-18T08:58:04.964124Z","caller":"traceutil/trace.go:172","msg":"trace[1794934859] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-create-fqkkr; range_end:; response_count:1; response_revision:1119; }","duration":"166.456868ms","start":"2025-10-18T08:58:04.797606Z","end":"2025-10-18T08:58:04.964063Z","steps":["trace[1794934859] 'range keys from in-memory index tree'  (duration: 166.081272ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:06.236693Z","caller":"traceutil/trace.go:172","msg":"trace[814094277] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1152; }","duration":"230.251725ms","start":"2025-10-18T08:58:06.006377Z","end":"2025-10-18T08:58:06.236629Z","steps":["trace[814094277] 'read index received'  (duration: 230.242125ms)","trace[814094277] 'applied index is now lower than readState.Index'  (duration: 8.071µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T08:58:06.236838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"230.447012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:58:06.236860Z","caller":"traceutil/trace.go:172","msg":"trace[2045972307] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1120; }","duration":"230.479803ms","start":"2025-10-18T08:58:06.006373Z","end":"2025-10-18T08:58:06.236853Z","steps":["trace[2045972307] 'agreement among raft nodes before linearized reading'  (duration: 230.416143ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T08:58:06.237206Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.454986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-66bc5c9577-mcrjx.186f8a152967d021\" limit:1 ","response":"range_response_count:1 size:823"}
	{"level":"info","ts":"2025-10-18T08:58:06.237320Z","caller":"traceutil/trace.go:172","msg":"trace[512904585] range","detail":"{range_begin:/registry/events/kube-system/coredns-66bc5c9577-mcrjx.186f8a152967d021; range_end:; response_count:1; response_revision:1120; }","duration":"106.572298ms","start":"2025-10-18T08:58:06.130738Z","end":"2025-10-18T08:58:06.237311Z","steps":["trace[512904585] 'agreement among raft nodes before linearized reading'  (duration: 106.246205ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:07.852493Z","caller":"traceutil/trace.go:172","msg":"trace[878857505] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1157; }","duration":"168.461728ms","start":"2025-10-18T08:58:07.684012Z","end":"2025-10-18T08:58:07.852474Z","steps":["trace[878857505] 'read index received'  (duration: 168.455271ms)","trace[878857505] 'applied index is now lower than readState.Index'  (duration: 5.057µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T08:58:07.852668Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.649289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T08:58:07.852695Z","caller":"traceutil/trace.go:172","msg":"trace[1088061405] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1124; }","duration":"168.692657ms","start":"2025-10-18T08:58:07.683993Z","end":"2025-10-18T08:58:07.852686Z","steps":["trace[1088061405] 'agreement among raft nodes before linearized reading'  (duration: 168.612341ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:07.853117Z","caller":"traceutil/trace.go:172","msg":"trace[1034027905] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"252.991392ms","start":"2025-10-18T08:58:07.600115Z","end":"2025-10-18T08:58:07.853106Z","steps":["trace[1034027905] 'process raft request'  (duration: 252.835346ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:10.116751Z","caller":"traceutil/trace.go:172","msg":"trace[1933487023] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"108.367264ms","start":"2025-10-18T08:58:10.008361Z","end":"2025-10-18T08:58:10.116728Z","steps":["trace[1933487023] 'process raft request'  (duration: 108.257817ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:10.247642Z","caller":"traceutil/trace.go:172","msg":"trace[2045540431] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"116.99445ms","start":"2025-10-18T08:58:10.130632Z","end":"2025-10-18T08:58:10.247627Z","steps":["trace[2045540431] 'process raft request'  (duration: 107.275335ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:14.097984Z","caller":"traceutil/trace.go:172","msg":"trace[1705990554] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"101.808493ms","start":"2025-10-18T08:58:13.996160Z","end":"2025-10-18T08:58:14.097968Z","steps":["trace[1705990554] 'process raft request'  (duration: 101.691044ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:58:19.800747Z","caller":"traceutil/trace.go:172","msg":"trace[1075823104] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"123.918525ms","start":"2025-10-18T08:58:19.676812Z","end":"2025-10-18T08:58:19.800730Z","steps":["trace[1075823104] 'process raft request'  (duration: 123.798116ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:59:52.012188Z","caller":"traceutil/trace.go:172","msg":"trace[474176566] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"109.391474ms","start":"2025-10-18T08:59:51.902770Z","end":"2025-10-18T08:59:52.012162Z","steps":["trace[474176566] 'process raft request'  (duration: 109.290336ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T08:59:52.205993Z","caller":"traceutil/trace.go:172","msg":"trace[217960243] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"123.621622ms","start":"2025-10-18T08:59:52.082353Z","end":"2025-10-18T08:59:52.205974Z","steps":["trace[217960243] 'process raft request'  (duration: 119.839662ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:00:00.962243Z","caller":"traceutil/trace.go:172","msg":"trace[1381266913] transaction","detail":"{read_only:false; response_revision:1652; number_of_response:1; }","duration":"129.95532ms","start":"2025-10-18T09:00:00.832275Z","end":"2025-10-18T09:00:00.962230Z","steps":["trace[1381266913] 'process raft request'  (duration: 129.858171ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:02:31 up 6 min,  0 users,  load average: 0.39, 1.11, 0.67
	Linux addons-281483 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2a104833a65484c0ba7480db7fb1d65b86b33cafd58c1b9f2c3be34e2ede72c0] <==
	E1018 08:57:40.165357       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.52.64:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.52.64:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.52.64:443: connect: connection refused" logger="UnhandledError"
	I1018 08:57:40.306299       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:59:27.372364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.144:8443->192.168.39.1:51996: use of closed network connection
	E1018 08:59:27.569899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.144:8443->192.168.39.1:52036: use of closed network connection
	I1018 08:59:36.844868       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.81.69"}
	I1018 08:59:55.841249       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 08:59:56.058298       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.183.228"}
	I1018 09:00:07.457451       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 09:00:13.893485       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1018 09:00:24.938960       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 09:00:24.939011       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 09:00:24.962202       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 09:00:24.963177       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 09:00:24.999326       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 09:00:25.001133       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 09:00:25.018582       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 09:00:25.018640       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 09:00:25.043486       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 09:00:25.043578       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1018 09:00:25.136078       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1018 09:00:26.005327       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 09:00:26.044028       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1018 09:00:26.072239       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1018 09:00:41.192099       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 09:02:29.336728       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.233.63"}
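
Note: the flurry of handler.go:285 / cacher.go:182 lines at 09:00:24-26 is the snapshot.storage.k8s.io API being re-synced and then its watchers terminated, consistent with the volumesnapshot CRDs being removed once that addon finished its test; the kube-controller-manager errors below follow directly from it. The hello-world-app clusterIP allocated at 09:02:29 matches the 1s-old pod in the node description above.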
	
	
	==> kube-controller-manager [ddaa51678623400e17145b0fb07f0bdf6846d9635b19194ae97540b66b1112a7] <==
	I1018 09:00:41.413265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:00:41.447989       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:00:41.448124       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 09:00:44.868684       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:00:44.869536       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:00:44.870554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:00:44.871145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:00:47.692123       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:00:47.693488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:04.051384       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:04.052601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:09.880743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:09.881893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:10.063767       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:10.064978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:36.648782       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:36.649791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:40.615871       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:40.617164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:01:54.226019       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:01:54.227344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:02:25.375463       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:02:25.376821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 09:02:28.175607       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 09:02:28.176491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
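
Note: every error in this block is the same pair of metadata informers retrying watches against a resource type that no longer exists. The timing (starting 09:00:44, right after the snapshot.storage.k8s.io watchers were terminated in the apiserver log) points at the deleted volumesnapshot CRDs; the retries are noisy but harmless, and not obviously related to the Ingress failure.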
	
	
	==> kube-proxy [4d0b939e2616731fc658682bb44e2e01b00eef8060c758bb464eb4d95c9f586b] <==
	I1018 08:56:43.769297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:56:43.929117       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:56:43.929162       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.144"]
	E1018 08:56:43.929245       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:56:44.087774       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1018 08:56:44.087878       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 08:56:44.087908       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:56:44.231313       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:56:44.239978       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:56:44.249572       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:56:44.316563       1 config.go:200] "Starting service config controller"
	I1018 08:56:44.316589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:56:44.316610       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:56:44.316614       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:56:44.316624       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:56:44.316628       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:56:44.317255       1 config.go:309] "Starting node config controller"
	I1018 08:56:44.317281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:56:44.317287       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:56:44.417171       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:56:44.417196       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:56:44.417234       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
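
Note: the ip6tables "Table does not exist" error only means this guest kernel ships no IPv6 NAT support; as the next lines state, kube-proxy simply proceeds in single-stack IPv4 iptables mode. Nothing here is a fault.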
	
	
	==> kube-scheduler [18b9e343a3e5f57af547911e73472f914ef13df6462a8516d79278247a770199] <==
	E1018 08:56:34.322363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:56:34.322407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:56:34.322438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:56:34.322481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:56:34.322564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:56:34.323381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:56:34.323563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:56:34.323718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:56:34.324491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:56:34.325773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:56:34.326217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:56:34.326577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:56:35.134785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:56:35.195835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:56:35.227223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:56:35.247537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:56:35.274082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:56:35.288009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:56:35.326094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:56:35.352668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:56:35.352789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:56:35.359406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:56:35.543290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:56:35.795091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 08:56:38.203621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
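	
	The "Failed to watch" errors above are a startup race: the scheduler's informers begin listing resources before the bootstrap RBAC rules have propagated, and the closing "Caches are synced" line shows the watches recovered on their own. As a hedged spot-check (hypothetical commands, not part of the recorded run), kubectl's impersonation support can confirm the scheduler's list permissions after startup:
	
	  kubectl --context addons-281483 auth can-i list statefulsets.apps --as=system:kube-scheduler --all-namespaces
	  kubectl --context addons-281483 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler --all-namespaces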
	
	
	==> kubelet <==
	Oct 18 09:00:47 addons-281483 kubelet[1504]: E1018 09:00:47.452237    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778047451736075  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:00:57 addons-281483 kubelet[1504]: E1018 09:00:57.455410    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778057454769215  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:00:57 addons-281483 kubelet[1504]: E1018 09:00:57.455455    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778057454769215  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:07 addons-281483 kubelet[1504]: E1018 09:01:07.458246    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778067457830186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:07 addons-281483 kubelet[1504]: E1018 09:01:07.458277    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778067457830186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:17 addons-281483 kubelet[1504]: E1018 09:01:17.461689    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778077461119068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:17 addons-281483 kubelet[1504]: E1018 09:01:17.461733    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778077461119068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:27 addons-281483 kubelet[1504]: E1018 09:01:27.464886    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778087464329636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:27 addons-281483 kubelet[1504]: E1018 09:01:27.464914    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778087464329636  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:37 addons-281483 kubelet[1504]: E1018 09:01:37.467896    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778097467552170  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:37 addons-281483 kubelet[1504]: E1018 09:01:37.467926    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778097467552170  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:41 addons-281483 kubelet[1504]: I1018 09:01:41.126605    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-mcrjx" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:01:47 addons-281483 kubelet[1504]: E1018 09:01:47.471702    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778107471256870  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:47 addons-281483 kubelet[1504]: E1018 09:01:47.471750    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778107471256870  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:51 addons-281483 kubelet[1504]: I1018 09:01:51.131566    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:01:57 addons-281483 kubelet[1504]: E1018 09:01:57.475097    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778117474358617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:01:57 addons-281483 kubelet[1504]: E1018 09:01:57.475122    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778117474358617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:02 addons-281483 kubelet[1504]: I1018 09:02:02.126727    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6ms88" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:02:07 addons-281483 kubelet[1504]: E1018 09:02:07.479243    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778127478553940  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:07 addons-281483 kubelet[1504]: E1018 09:02:07.479284    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778127478553940  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:17 addons-281483 kubelet[1504]: E1018 09:02:17.482277    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778137481901822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:17 addons-281483 kubelet[1504]: E1018 09:02:17.482326    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778137481901822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:27 addons-281483 kubelet[1504]: E1018 09:02:27.485277    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760778147484677500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:27 addons-281483 kubelet[1504]: E1018 09:02:27.485307    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760778147484677500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 18 09:02:29 addons-281483 kubelet[1504]: I1018 09:02:29.351281    1504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kghr\" (UniqueName: \"kubernetes.io/projected/55018abc-b5e7-4288-8939-cf1851c682dc-kube-api-access-2kghr\") pod \"hello-world-app-5d498dc89-sqp9h\" (UID: \"55018abc-b5e7-4288-8939-cf1851c682dc\") " pod="default/hello-world-app-5d498dc89-sqp9h"
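	
	The eviction-manager errors above recur on the kubelet's ten-second sync interval (see the timestamps): the kubelet cannot derive a dedicated image filesystem from the stats cri-o reports for /var/lib/containers/storage/overlay-images, so each eviction synchronization pass is skipped. A hedged way to view the runtime-side stats directly (hypothetical command; assumes crictl is present in the guest, as it is in the minikube VM image):
	
	  minikube -p addons-281483 ssh -- sudo crictl imagefsinfo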
	
	
	==> storage-provisioner [b9c580acbcbc35fe0fda47c3bedbbb9df73360212a8468020f0399f9affaa1e2] <==
	W1018 09:02:05.721791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:07.725558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:07.734195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:09.738156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:09.744485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:11.748203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:11.753578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:13.757939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:13.763202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:15.767721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:15.775440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:17.780387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:17.787393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:19.791290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:19.798994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:21.802839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:21.809800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:23.814348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:23.820372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:25.824911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:25.830324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:27.834619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:27.842757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:29.846889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:02:29.861291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
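	
	The storage-provisioner queries the v1 Endpoints API roughly every two seconds (matching the timestamps above), presumably for its leader-election lock, and every request draws the server-side deprecation warning; nothing here is failing. A hedged manual equivalent of what the warning recommends (hypothetical command, illustrative only):
	
	  kubectl --context addons-281483 get endpointslices.discovery.k8s.io -A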
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-281483 -n addons-281483
helpers_test.go:269: (dbg) Run:  kubectl --context addons-281483 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sqp9h ingress-nginx-admission-create-gz4z5 ingress-nginx-admission-patch-h6mml
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-281483 describe pod hello-world-app-5d498dc89-sqp9h ingress-nginx-admission-create-gz4z5 ingress-nginx-admission-patch-h6mml
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-281483 describe pod hello-world-app-5d498dc89-sqp9h ingress-nginx-admission-create-gz4z5 ingress-nginx-admission-patch-h6mml: exit status 1 (80.846608ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-sqp9h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-281483/192.168.39.144
	Start Time:       Sat, 18 Oct 2025 09:02:29 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2kghr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2kghr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sqp9h to addons-281483
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gz4z5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h6mml" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-281483 describe pod hello-world-app-5d498dc89-sqp9h ingress-nginx-admission-create-gz4z5 ingress-nginx-admission-patch-h6mml: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable ingress-dns --alsologtostderr -v=1: (1.151322881s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable ingress --alsologtostderr -v=1: (7.785581437s)
--- FAIL: TestAddons/parallel/Ingress (165.36s)

TestPreload (131.49s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-081901 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1018 09:45:19.370431  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-081901 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m1.552968756s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-081901 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-081901 image pull gcr.io/k8s-minikube/busybox: (3.520593868s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-081901
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-081901: (6.757104486s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-081901 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:47:16.304675  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-081901 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.533123312s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-081901 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
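The list above holds only the images restored from the v1.32.0 preload tarball, which suggests the busybox image pulled before the stop was not retained across the stop/start cycle. A hedged cross-check against the runtime's own image store (hypothetical command, not from the test run):

    minikube -p test-preload-081901 ssh -- sudo crictl images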
panic.go:636: *** TestPreload FAILED at 2025-10-18 09:47:19.223025109 +0000 UTC m=+3117.227535758
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-081901 -n test-preload-081901
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-081901 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-081901 logs -n 25: (1.185071259s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-670094 ssh -n multinode-670094-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ ssh     │ multinode-670094 ssh -n multinode-670094 sudo cat /home/docker/cp-test_multinode-670094-m03_multinode-670094.txt                                                                    │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ cp      │ multinode-670094 cp multinode-670094-m03:/home/docker/cp-test.txt multinode-670094-m02:/home/docker/cp-test_multinode-670094-m03_multinode-670094-m02.txt                           │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ ssh     │ multinode-670094 ssh -n multinode-670094-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ ssh     │ multinode-670094 ssh -n multinode-670094-m02 sudo cat /home/docker/cp-test_multinode-670094-m03_multinode-670094-m02.txt                                                            │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ node    │ multinode-670094 node stop m03                                                                                                                                                      │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ node    │ multinode-670094 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ node    │ list -p multinode-670094                                                                                                                                                            │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p multinode-670094                                                                                                                                                                 │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:38 UTC │
	│ start   │ -p multinode-670094 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:38 UTC │ 18 Oct 25 09:40 UTC │
	│ node    │ list -p multinode-670094                                                                                                                                                            │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │                     │
	│ node    │ multinode-670094 node delete m03                                                                                                                                                    │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ stop    │ multinode-670094 stop                                                                                                                                                               │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p multinode-670094 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ node    │ list -p multinode-670094                                                                                                                                                            │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p multinode-670094-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-670094-m02 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p multinode-670094-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-670094-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ node    │ add -p multinode-670094                                                                                                                                                             │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ delete  │ -p multinode-670094-m03                                                                                                                                                             │ multinode-670094-m03 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ delete  │ -p multinode-670094                                                                                                                                                                 │ multinode-670094     │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p test-preload-081901 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-081901  │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ image   │ test-preload-081901 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-081901  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ stop    │ -p test-preload-081901                                                                                                                                                              │ test-preload-081901  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ start   │ -p test-preload-081901 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-081901  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:47 UTC │
	│ image   │ test-preload-081901 image list                                                                                                                                                      │ test-preload-081901  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:46:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:46:22.510867  138443 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:46:22.511109  138443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:22.511118  138443 out.go:374] Setting ErrFile to fd 2...
	I1018 09:46:22.511122  138443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:22.511325  138443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:46:22.511807  138443 out.go:368] Setting JSON to false
	I1018 09:46:22.512657  138443 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5323,"bootTime":1760775460,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:46:22.512751  138443 start.go:141] virtualization: kvm guest
	I1018 09:46:22.515060  138443 out.go:179] * [test-preload-081901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:46:22.516552  138443 notify.go:220] Checking for updates...
	I1018 09:46:22.516620  138443 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:46:22.518121  138443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:46:22.519678  138443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:46:22.521231  138443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:46:22.522747  138443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:46:22.524251  138443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:46:22.526386  138443 config.go:182] Loaded profile config "test-preload-081901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:46:22.526966  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:46:22.527051  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:46:22.541830  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I1018 09:46:22.542343  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:46:22.542937  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:46:22.542961  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:46:22.543370  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:46:22.543620  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:22.545889  138443 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 09:46:22.547488  138443 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:46:22.547946  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:46:22.548002  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:46:22.561752  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I1018 09:46:22.562274  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:46:22.562798  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:46:22.562876  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:46:22.563253  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:46:22.563480  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:22.601991  138443 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 09:46:22.603638  138443 start.go:305] selected driver: kvm2
	I1018 09:46:22.603658  138443 start.go:925] validating driver "kvm2" against &{Name:test-preload-081901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-081901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:22.603780  138443 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:46:22.604503  138443 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:46:22.604598  138443 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:46:22.620024  138443 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:46:22.620066  138443 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:46:22.634255  138443 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:46:22.634591  138443 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:46:22.634617  138443 cni.go:84] Creating CNI manager for ""
	I1018 09:46:22.634680  138443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:46:22.634741  138443 start.go:349] cluster config:
	{Name:test-preload-081901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-081901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:22.634837  138443 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:46:22.636767  138443 out.go:179] * Starting "test-preload-081901" primary control-plane node in "test-preload-081901" cluster
	I1018 09:46:22.638096  138443 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:46:23.480556  138443 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 09:46:23.480609  138443 cache.go:58] Caching tarball of preloaded images
	I1018 09:46:23.480788  138443 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:46:23.482725  138443 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1018 09:46:23.484199  138443 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 09:46:23.586672  138443 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1018 09:46:23.586730  138443 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1018 09:46:32.670691  138443 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
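	
	The preload tarball is validated against the md5 returned by the GCS API. A hedged manual equivalent, reusing the URL and checksum from the lines above (hypothetical commands, illustrative only):
	
	  curl -sSLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	  md5sum preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4   # expect 2acdb4dde52794f2167c79dcee7507ae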
	I1018 09:46:32.670837  138443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/config.json ...
	I1018 09:46:32.671694  138443 start.go:360] acquireMachinesLock for test-preload-081901: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:46:32.671776  138443 start.go:364] duration metric: took 50.308µs to acquireMachinesLock for "test-preload-081901"
	I1018 09:46:32.671790  138443 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:46:32.671795  138443 fix.go:54] fixHost starting: 
	I1018 09:46:32.672061  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:46:32.672098  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:46:32.685585  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I1018 09:46:32.686103  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:46:32.686641  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:46:32.686671  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:46:32.687058  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:46:32.687304  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:32.687455  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetState
	I1018 09:46:32.689477  138443 fix.go:112] recreateIfNeeded on test-preload-081901: state=Stopped err=<nil>
	I1018 09:46:32.689503  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	W1018 09:46:32.689659  138443 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:46:32.691449  138443 out.go:252] * Restarting existing kvm2 VM for "test-preload-081901" ...
	I1018 09:46:32.691480  138443 main.go:141] libmachine: (test-preload-081901) Calling .Start
	I1018 09:46:32.691652  138443 main.go:141] libmachine: (test-preload-081901) starting domain...
	I1018 09:46:32.691678  138443 main.go:141] libmachine: (test-preload-081901) ensuring networks are active...
	I1018 09:46:32.692442  138443 main.go:141] libmachine: (test-preload-081901) Ensuring network default is active
	I1018 09:46:32.692879  138443 main.go:141] libmachine: (test-preload-081901) Ensuring network mk-test-preload-081901 is active
	I1018 09:46:32.693387  138443 main.go:141] libmachine: (test-preload-081901) getting domain XML...
	I1018 09:46:32.694494  138443 main.go:141] libmachine: (test-preload-081901) DBG | starting domain XML:
	I1018 09:46:32.694512  138443 main.go:141] libmachine: (test-preload-081901) DBG | <domain type='kvm'>
	I1018 09:46:32.694520  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <name>test-preload-081901</name>
	I1018 09:46:32.694536  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <uuid>0f3b7452-dca9-471a-8842-c5b690e04765</uuid>
	I1018 09:46:32.694557  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 09:46:32.694570  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 09:46:32.694580  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 09:46:32.694587  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <os>
	I1018 09:46:32.694593  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 09:46:32.694598  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <boot dev='cdrom'/>
	I1018 09:46:32.694603  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <boot dev='hd'/>
	I1018 09:46:32.694610  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <bootmenu enable='no'/>
	I1018 09:46:32.694615  138443 main.go:141] libmachine: (test-preload-081901) DBG |   </os>
	I1018 09:46:32.694638  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <features>
	I1018 09:46:32.694668  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <acpi/>
	I1018 09:46:32.694692  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <apic/>
	I1018 09:46:32.694703  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <pae/>
	I1018 09:46:32.694710  138443 main.go:141] libmachine: (test-preload-081901) DBG |   </features>
	I1018 09:46:32.694724  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 09:46:32.694733  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <clock offset='utc'/>
	I1018 09:46:32.694743  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 09:46:32.694751  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <on_reboot>restart</on_reboot>
	I1018 09:46:32.694761  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <on_crash>destroy</on_crash>
	I1018 09:46:32.694773  138443 main.go:141] libmachine: (test-preload-081901) DBG |   <devices>
	I1018 09:46:32.694783  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 09:46:32.694796  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <disk type='file' device='cdrom'>
	I1018 09:46:32.694806  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <driver name='qemu' type='raw'/>
	I1018 09:46:32.694823  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/boot2docker.iso'/>
	I1018 09:46:32.694837  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 09:46:32.694844  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <readonly/>
	I1018 09:46:32.694853  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 09:46:32.694862  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </disk>
	I1018 09:46:32.694871  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <disk type='file' device='disk'>
	I1018 09:46:32.694883  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 09:46:32.694914  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/test-preload-081901.rawdisk'/>
	I1018 09:46:32.694931  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <target dev='hda' bus='virtio'/>
	I1018 09:46:32.694945  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 09:46:32.694954  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </disk>
	I1018 09:46:32.694962  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 09:46:32.694972  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 09:46:32.694980  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </controller>
	I1018 09:46:32.694986  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 09:46:32.694992  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 09:46:32.695000  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 09:46:32.695031  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </controller>
	I1018 09:46:32.695054  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <interface type='network'>
	I1018 09:46:32.695066  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <mac address='52:54:00:fd:01:25'/>
	I1018 09:46:32.695079  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <source network='mk-test-preload-081901'/>
	I1018 09:46:32.695093  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <model type='virtio'/>
	I1018 09:46:32.695103  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 09:46:32.695117  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </interface>
	I1018 09:46:32.695134  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <interface type='network'>
	I1018 09:46:32.695167  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <mac address='52:54:00:56:b8:a7'/>
	I1018 09:46:32.695184  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <source network='default'/>
	I1018 09:46:32.695197  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <model type='virtio'/>
	I1018 09:46:32.695210  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 09:46:32.695221  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </interface>
	I1018 09:46:32.695229  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <serial type='pty'>
	I1018 09:46:32.695241  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <target type='isa-serial' port='0'>
	I1018 09:46:32.695256  138443 main.go:141] libmachine: (test-preload-081901) DBG |         <model name='isa-serial'/>
	I1018 09:46:32.695268  138443 main.go:141] libmachine: (test-preload-081901) DBG |       </target>
	I1018 09:46:32.695278  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </serial>
	I1018 09:46:32.695288  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <console type='pty'>
	I1018 09:46:32.695299  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <target type='serial' port='0'/>
	I1018 09:46:32.695309  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </console>
	I1018 09:46:32.695320  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <input type='mouse' bus='ps2'/>
	I1018 09:46:32.695332  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 09:46:32.695345  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <audio id='1' type='none'/>
	I1018 09:46:32.695353  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <memballoon model='virtio'>
	I1018 09:46:32.695367  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 09:46:32.695377  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </memballoon>
	I1018 09:46:32.695385  138443 main.go:141] libmachine: (test-preload-081901) DBG |     <rng model='virtio'>
	I1018 09:46:32.695400  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <backend model='random'>/dev/random</backend>
	I1018 09:46:32.695415  138443 main.go:141] libmachine: (test-preload-081901) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 09:46:32.695425  138443 main.go:141] libmachine: (test-preload-081901) DBG |     </rng>
	I1018 09:46:32.695433  138443 main.go:141] libmachine: (test-preload-081901) DBG |   </devices>
	I1018 09:46:32.695442  138443 main.go:141] libmachine: (test-preload-081901) DBG | </domain>
	I1018 09:46:32.695453  138443 main.go:141] libmachine: (test-preload-081901) DBG | 
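	
	The block above is the complete libvirt domain XML the kvm2 driver boots. A hedged way to fetch the same definition straight from libvirt while the machine exists (hypothetical command, not from the test run):
	
	  virsh -c qemu:///system dumpxml test-preload-081901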
	I1018 09:46:34.266853  138443 main.go:141] libmachine: (test-preload-081901) waiting for domain to start...
	I1018 09:46:34.268277  138443 main.go:141] libmachine: (test-preload-081901) domain is now running
	I1018 09:46:34.268324  138443 main.go:141] libmachine: (test-preload-081901) waiting for IP...
	I1018 09:46:34.269210  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:34.269735  138443 main.go:141] libmachine: (test-preload-081901) found domain IP: 192.168.39.189
	I1018 09:46:34.269764  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has current primary IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:34.269773  138443 main.go:141] libmachine: (test-preload-081901) reserving static IP address...
	I1018 09:46:34.270227  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "test-preload-081901", mac: "52:54:00:fd:01:25", ip: "192.168.39.189"} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:45:26 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:34.270260  138443 main.go:141] libmachine: (test-preload-081901) reserved static IP address 192.168.39.189 for domain test-preload-081901
	I1018 09:46:34.270283  138443 main.go:141] libmachine: (test-preload-081901) DBG | skip adding static IP to network mk-test-preload-081901 - found existing host DHCP lease matching {name: "test-preload-081901", mac: "52:54:00:fd:01:25", ip: "192.168.39.189"}
	I1018 09:46:34.270298  138443 main.go:141] libmachine: (test-preload-081901) waiting for SSH...
	I1018 09:46:34.270309  138443 main.go:141] libmachine: (test-preload-081901) DBG | Getting to WaitForSSH function...
	I1018 09:46:34.272683  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:34.273095  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:45:26 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:34.273126  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:34.273301  138443 main.go:141] libmachine: (test-preload-081901) DBG | Using SSH client type: external
	I1018 09:46:34.273333  138443 main.go:141] libmachine: (test-preload-081901) DBG | Using SSH private key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa (-rw-------)
	I1018 09:46:34.273366  138443 main.go:141] libmachine: (test-preload-081901) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:46:34.273381  138443 main.go:141] libmachine: (test-preload-081901) DBG | About to run SSH command:
	I1018 09:46:34.273393  138443 main.go:141] libmachine: (test-preload-081901) DBG | exit 0
	I1018 09:46:44.499660  138443 main.go:141] libmachine: (test-preload-081901) DBG | SSH cmd err, output: exit status 255: 
	I1018 09:46:44.499710  138443 main.go:141] libmachine: (test-preload-081901) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1018 09:46:44.499720  138443 main.go:141] libmachine: (test-preload-081901) DBG | command : exit 0
	I1018 09:46:44.499725  138443 main.go:141] libmachine: (test-preload-081901) DBG | err     : exit status 255
	I1018 09:46:44.499735  138443 main.go:141] libmachine: (test-preload-081901) DBG | output  : 
	I1018 09:46:47.501825  138443 main.go:141] libmachine: (test-preload-081901) DBG | Getting to WaitForSSH function...
	I1018 09:46:47.504834  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.505235  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:47.505260  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.505415  138443 main.go:141] libmachine: (test-preload-081901) DBG | Using SSH client type: external
	I1018 09:46:47.505433  138443 main.go:141] libmachine: (test-preload-081901) DBG | Using SSH private key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa (-rw-------)
	I1018 09:46:47.505460  138443 main.go:141] libmachine: (test-preload-081901) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:46:47.505481  138443 main.go:141] libmachine: (test-preload-081901) DBG | About to run SSH command:
	I1018 09:46:47.505502  138443 main.go:141] libmachine: (test-preload-081901) DBG | exit 0
	I1018 09:46:47.639796  138443 main.go:141] libmachine: (test-preload-081901) DBG | SSH cmd err, output: <nil>: 
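
The WaitForSSH loop above shells out to the system ssh binary and retries `exit 0` until the guest's sshd answers: the first attempt at 09:46:44 fails with status 255 because sshd is not up yet, and the retry at 09:46:47 succeeds. A minimal stand-alone reproduction of that probe, using the exact client options from the log (key path and IP as shown above):

    # Probe SSH readiness the way the provisioner does: run `exit 0` through
    # the external ssh client and retry until the guest answers.
    KEY=/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa
    until ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
          -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
          -o PasswordAuthentication=no -o ServerAliveInterval=60 \
          -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o IdentitiesOnly=yes -i "$KEY" -p 22 docker@192.168.39.189 'exit 0'; do
      sleep 3   # matches the ~3s gap between attempts in the log
    done
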
	I1018 09:46:47.640221  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetConfigRaw
	I1018 09:46:47.641036  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetIP
	I1018 09:46:47.643564  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.643939  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:47.643965  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.644292  138443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/config.json ...
	I1018 09:46:47.644565  138443 machine.go:93] provisionDockerMachine start ...
	I1018 09:46:47.644591  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:47.644839  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:47.647575  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.647962  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:47.647991  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.648154  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:47.648349  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.648516  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.648654  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:47.648840  138443 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:47.649192  138443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1018 09:46:47.649207  138443 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:46:47.761903  138443 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1018 09:46:47.761938  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetMachineName
	I1018 09:46:47.762221  138443 buildroot.go:166] provisioning hostname "test-preload-081901"
	I1018 09:46:47.762249  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetMachineName
	I1018 09:46:47.762471  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:47.766014  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.766442  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:47.766470  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.766649  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:47.766839  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.767007  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.767129  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:47.767307  138443 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:47.767512  138443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1018 09:46:47.767524  138443 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-081901 && echo "test-preload-081901" | sudo tee /etc/hostname
	I1018 09:46:47.901219  138443 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-081901
	
	I1018 09:46:47.901249  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:47.904514  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.904908  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:47.904933  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:47.905147  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:47.905410  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.905594  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:47.905794  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:47.905947  138443 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:47.906133  138443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1018 09:46:47.906170  138443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-081901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-081901/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-081901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:46:48.026497  138443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:46:48.026529  138443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:46:48.026549  138443 buildroot.go:174] setting up certificates
	I1018 09:46:48.026560  138443 provision.go:84] configureAuth start
	I1018 09:46:48.026569  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetMachineName
	I1018 09:46:48.026855  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetIP
	I1018 09:46:48.029630  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.029991  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.030023  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.030244  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.032910  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.033314  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.033336  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.033587  138443 provision.go:143] copyHostCerts
	I1018 09:46:48.033678  138443 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:46:48.033700  138443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:46:48.033784  138443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:46:48.033890  138443 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:46:48.033901  138443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:46:48.033936  138443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:46:48.034001  138443 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:46:48.034009  138443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:46:48.034037  138443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:46:48.034093  138443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.test-preload-081901 san=[127.0.0.1 192.168.39.189 localhost minikube test-preload-081901]
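
configureAuth re-copies the host-side CA material and mints a fresh server certificate for the machine with the SAN set listed above (127.0.0.1, the guest IP, localhost, minikube, and the hostname). minikube does this in Go; as a rough openssl equivalent, where only the paths, org, and SANs come from the log and the key size, validity, and extension syntax are assumptions:

    # Hypothetical openssl sketch of the "generating server cert" step above.
    CERTS=/home/jenkins/minikube-integration/21764-104457/.minikube/certs
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.test-preload-081901"
    openssl x509 -req -in server.csr -days 1095 \
      -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.189,DNS:localhost,DNS:minikube,DNS:test-preload-081901') \
      -out server.pem
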
	I1018 09:46:48.260957  138443 provision.go:177] copyRemoteCerts
	I1018 09:46:48.261040  138443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:46:48.261066  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.264311  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.264769  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.264806  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.265053  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.265315  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.265527  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.265676  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:46:48.355068  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:46:48.390313  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:46:48.419356  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:46:48.447726  138443 provision.go:87] duration metric: took 421.151889ms to configureAuth
	I1018 09:46:48.447759  138443 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:46:48.447952  138443 config.go:182] Loaded profile config "test-preload-081901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:46:48.448051  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.450980  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.451342  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.451376  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.451535  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.451763  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.451978  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.452158  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.452346  138443 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:48.452538  138443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1018 09:46:48.452552  138443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:46:48.709511  138443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:46:48.709542  138443 machine.go:96] duration metric: took 1.064960307s to provisionDockerMachine
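
The command above drops a one-line sysconfig file into the guest and restarts CRI-O so the --insecure-registry flag for the service CIDR (10.96.0.0/12) takes effect. Per the echoed output, the guest should now look like this:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    $ systemctl is-active crio    # restarted by the same SSH command
    active
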
	I1018 09:46:48.709557  138443 start.go:293] postStartSetup for "test-preload-081901" (driver="kvm2")
	I1018 09:46:48.709571  138443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:46:48.709601  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:48.709956  138443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:46:48.709987  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.712863  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.713211  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.713241  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.713407  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.713587  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.713789  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.713983  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:46:48.801455  138443 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:46:48.806205  138443 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:46:48.806238  138443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:46:48.806339  138443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:46:48.806463  138443 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:46:48.806589  138443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:46:48.818473  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:46:48.847671  138443 start.go:296] duration metric: took 138.097696ms for postStartSetup
	I1018 09:46:48.847717  138443 fix.go:56] duration metric: took 16.175920508s for fixHost
	I1018 09:46:48.847744  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.850535  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.850914  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.850939  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.851118  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.851368  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.851555  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.851711  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.851893  138443 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:48.852180  138443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1018 09:46:48.852195  138443 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:46:48.964618  138443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760780808.927742996
	
	I1018 09:46:48.964638  138443 fix.go:216] guest clock: 1760780808.927742996
	I1018 09:46:48.964648  138443 fix.go:229] Guest: 2025-10-18 09:46:48.927742996 +0000 UTC Remote: 2025-10-18 09:46:48.847723507 +0000 UTC m=+26.376638450 (delta=80.019489ms)
	I1018 09:46:48.964675  138443 fix.go:200] guest clock delta is within tolerance: 80.019489ms
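
The clock check above compares the guest's `date +%s.%N` against the host-side timestamp captured when provisioning finished; the 80ms delta is inside tolerance, so no forced time sync is needed. The same comparison from the host, as a sketch (the exact tolerance constant minikube applies is not shown in the log):

    # Rough guest-vs-host clock delta check; $KEY as in the SSH probe above.
    guest=$(ssh -i "$KEY" docker@192.168.39.189 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(echo "$guest - $host" | bc)
    echo "guest clock delta: ${delta}s"   # minikube logged ~0.080s here
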
	I1018 09:46:48.964682  138443 start.go:83] releasing machines lock for "test-preload-081901", held for 16.292898235s
	I1018 09:46:48.964705  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:48.965065  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetIP
	I1018 09:46:48.968269  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.968702  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.968729  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.968971  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:48.969511  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:48.969823  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:46:48.969963  138443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:46:48.970012  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.970083  138443 ssh_runner.go:195] Run: cat /version.json
	I1018 09:46:48.970111  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:46:48.973087  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.973310  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.973563  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.973632  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.973751  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.973910  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:48.973930  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:48.973982  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.974106  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:46:48.974235  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.974304  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:46:48.974424  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:46:48.974633  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:46:48.974850  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:46:49.097067  138443 ssh_runner.go:195] Run: systemctl --version
	I1018 09:46:49.103323  138443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:46:49.249379  138443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:46:49.255904  138443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:46:49.255967  138443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:46:49.276072  138443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:46:49.276103  138443 start.go:495] detecting cgroup driver to use...
	I1018 09:46:49.276203  138443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:46:49.294330  138443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:46:49.311044  138443 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:46:49.311117  138443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:46:49.328268  138443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:46:49.344556  138443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:46:49.490475  138443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:46:49.711514  138443 docker.go:234] disabling docker service ...
	I1018 09:46:49.711594  138443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:46:49.727899  138443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:46:49.742847  138443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:46:49.903537  138443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:46:50.052487  138443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:46:50.068783  138443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:46:50.091151  138443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1018 09:46:50.091219  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.103880  138443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:46:50.103957  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.116437  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.128689  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.140887  138443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:46:50.153421  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.165865  138443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.186347  138443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:50.199481  138443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:46:50.210716  138443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:46:50.210776  138443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:46:50.230820  138443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:46:50.242351  138443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:50.384053  138443 ssh_runner.go:195] Run: sudo systemctl restart crio
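
The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, put conmon in the "pod" cgroup, and open unprivileged low ports via default_sysctls; br_netfilter is then loaded with modprobe because the sysctl probe failed, and IP forwarding is enabled before crio restarts. Reconstructed from those sed expressions, the drop-in should now contain (other keys omitted):

    $ cat /etc/crio/crio.conf.d/02-crio.conf   # reconstructed effect of the edits above
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
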
	I1018 09:46:50.499653  138443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:46:50.499731  138443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:46:50.505267  138443 start.go:563] Will wait 60s for crictl version
	I1018 09:46:50.505332  138443 ssh_runner.go:195] Run: which crictl
	I1018 09:46:50.509537  138443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:46:50.553606  138443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:46:50.553735  138443 ssh_runner.go:195] Run: crio --version
	I1018 09:46:50.583177  138443 ssh_runner.go:195] Run: crio --version
	I1018 09:46:50.614644  138443 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1018 09:46:50.615871  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetIP
	I1018 09:46:50.618671  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:50.619092  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:46:50.619119  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:46:50.619364  138443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1018 09:46:50.623675  138443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:46:50.638199  138443 kubeadm.go:883] updating cluster {Name:test-preload-081901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-081901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:46:50.638303  138443 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1018 09:46:50.638345  138443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:50.684763  138443 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1018 09:46:50.684854  138443 ssh_runner.go:195] Run: which lz4
	I1018 09:46:50.689270  138443 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:46:50.693940  138443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:46:50.693980  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1018 09:46:52.155153  138443 crio.go:462] duration metric: took 1.465894189s to copy over tarball
	I1018 09:46:52.155262  138443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 09:46:53.833791  138443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.678502085s)
	I1018 09:46:53.833816  138443 crio.go:469] duration metric: took 1.678624074s to extract the tarball
	I1018 09:46:53.833824  138443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 09:46:53.873432  138443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:53.918181  138443 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:46:53.918204  138443 cache_images.go:85] Images are preloaded, skipping loading
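
Preload flow in brief: the first `crictl images` scan found no v1.32.0 images, so the ~399 MB preloaded-images tarball was copied to /preloaded.tar.lz4, unpacked over /var (where CRI-O keeps its image store), deleted, and a second scan confirmed everything is present. Condensed, with the tar invocation verbatim from the log (the host-side copy is minikube's own scp-over-ssh, shown here only schematically):

    # Check-copy-extract-clean sequence for the preload tarball (sketch).
    stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null || \
      scp preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 \
          docker@192.168.39.189:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo crictl images --output json   # re-scan should now list the k8s images
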
	I1018 09:46:53.918212  138443 kubeadm.go:934] updating node { 192.168.39.189 8443 v1.32.0 crio true true} ...
	I1018 09:46:53.918339  138443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-081901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-081901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:46:53.918423  138443 ssh_runner.go:195] Run: crio config
	I1018 09:46:53.974940  138443 cni.go:84] Creating CNI manager for ""
	I1018 09:46:53.974966  138443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:46:53.974990  138443 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:46:53.975019  138443 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-081901 NodeName:test-preload-081901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:46:53.975201  138443 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-081901"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:46:53.975277  138443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1018 09:46:53.988146  138443 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:46:53.988219  138443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:46:54.000395  138443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1018 09:46:54.023857  138443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:46:54.046496  138443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
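
The three scp's above stage the kubelet drop-in, the kubelet unit, and the kubeadm manifest; note the manifest goes to kubeadm.yaml.new rather than over the live file. Later in the run the restart path diffs the staged copy against the live one to decide whether kubeadm re-init can be skipped, roughly:

    # A clean diff means the running cluster can be reused as-is (see the
    # "does not require reconfiguration" line further down).
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "running cluster does not require reconfiguration"
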
	I1018 09:46:54.069817  138443 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1018 09:46:54.074098  138443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
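
This is the same grep-then-rewrite one-liner used earlier for host.minikube.internal: strip any stale tab-separated entry for the name, append the fresh IP, and cp (not mv) the temp file back so it also works when /etc/hosts is a bind mount. Generalized (pin_host is a hypothetical helper, not a minikube function):

    # Idempotently pin NAME -> IP in /etc/hosts, mirroring the one-liners above.
    pin_host() {   # usage: pin_host NAME IP
      { grep -v $'\t'"$1"'$' /etc/hosts; printf '%s\t%s\n' "$2" "$1"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    }
    pin_host host.minikube.internal 192.168.39.1
    pin_host control-plane.minikube.internal 192.168.39.189
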
	I1018 09:46:54.090618  138443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:54.243544  138443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:46:54.294024  138443 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901 for IP: 192.168.39.189
	I1018 09:46:54.294055  138443 certs.go:195] generating shared ca certs ...
	I1018 09:46:54.294080  138443 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:54.294294  138443 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:46:54.294351  138443 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:46:54.294365  138443 certs.go:257] generating profile certs ...
	I1018 09:46:54.294468  138443 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.key
	I1018 09:46:54.294548  138443 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/apiserver.key.62dae9ef
	I1018 09:46:54.294598  138443 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/proxy-client.key
	I1018 09:46:54.294743  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:46:54.294785  138443 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:46:54.294797  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:46:54.294827  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:46:54.294855  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:46:54.294886  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:46:54.294957  138443 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:46:54.295678  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:46:54.327566  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:46:54.357713  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:46:54.387859  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:46:54.417397  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:46:54.446534  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:46:54.476154  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:46:54.505395  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:46:54.534268  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:46:54.566151  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:46:54.600107  138443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:46:54.632994  138443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:46:54.656924  138443 ssh_runner.go:195] Run: openssl version
	I1018 09:46:54.664170  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:46:54.678077  138443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:46:54.683431  138443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:46:54.683502  138443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:46:54.691244  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:46:54.705164  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:46:54.719022  138443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:54.724605  138443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:54.724665  138443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:54.732263  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:46:54.745818  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:46:54.759222  138443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:46:54.764474  138443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:46:54.764539  138443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:46:54.771998  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:46:54.785192  138443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:46:54.790689  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:46:54.798348  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:46:54.806002  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:46:54.814103  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:46:54.821253  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:46:54.828357  138443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
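
The openssl probes above validate the existing control-plane certificates instead of regenerating them: `-hash` derives the subject-hash names used for the /etc/ssl/certs symlinks (3ec20f2e.0, b5213941.0, 51391683.0), and `-checkend 86400` exits non-zero if a certificate expires within the next 86400 seconds (24 hours), which would force regeneration. The expiry sweep, compacted:

    # Flag any control-plane cert that expires within 24h, as the probes above do.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "certificate $c expires within 24h"
    done
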
	I1018 09:46:54.835298  138443 kubeadm.go:400] StartCluster: {Name:test-preload-081901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-081901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:54.835400  138443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:46:54.835455  138443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:46:54.873564  138443 cri.go:89] found id: ""
	I1018 09:46:54.873637  138443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:46:54.885689  138443 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:46:54.885709  138443 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:46:54.885755  138443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:46:54.897221  138443 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:46:54.897650  138443 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-081901" does not appear in /home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:46:54.897749  138443 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-104457/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-081901" cluster setting kubeconfig missing "test-preload-081901" context setting]
	I1018 09:46:54.897996  138443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/kubeconfig: {Name:mk43b332619cb442c058a4739a3d7e69542c9a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:54.898564  138443 kapi.go:59] client config for test-preload-081901: &rest.Config{Host:"https://192.168.39.189:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.key", CAFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
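
The client config above is assembled from the profile's client certificate, key, and cluster CA on disk. A minimal client-go sketch of the same construction; the file paths are hypothetical placeholders standing in for the profile files named in the kapi.go line, not the exact values minikube derives:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths are illustrative stand-ins for the profile's client.crt,
	// client.key, and the cluster ca.crt shown in the log.
	cfg := &rest.Config{
		Host: "https://192.168.39.189:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/test-preload-081901/client.crt",
			KeyFile:  "/path/to/profiles/test-preload-081901/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset for %s ready: %v\n", cfg.Host, client != nil)
}
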
	I1018 09:46:54.899025  138443 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:46:54.899047  138443 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:46:54.899051  138443 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:46:54.899055  138443 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:46:54.899059  138443 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:46:54.899418  138443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:46:54.910766  138443 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.189
	I1018 09:46:54.910809  138443 kubeadm.go:1160] stopping kube-system containers ...
	I1018 09:46:54.910827  138443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 09:46:54.910889  138443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:46:54.951921  138443 cri.go:89] found id: ""
	I1018 09:46:54.952016  138443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:46:54.974001  138443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:46:54.985741  138443 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:46:54.985761  138443 kubeadm.go:157] found existing configuration files:
	
	I1018 09:46:54.985806  138443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:46:54.997015  138443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:46:54.997091  138443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:46:55.008749  138443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:46:55.019511  138443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:46:55.019586  138443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:46:55.030943  138443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:46:55.041591  138443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:46:55.041672  138443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:46:55.053358  138443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:46:55.064520  138443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:46:55.064587  138443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
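
The four grep/rm pairs above implement a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm regenerates it. A minimal local sketch of that pass (minikube itself issues these commands through its SSH runner, not os/exec):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself)
		// is missing, which is exactly the status-2 case in the log.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s may not contain %s - removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
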
	I1018 09:46:55.076397  138443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:46:55.088503  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:46:55.143387  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:46:56.372721  138443 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.229294795s)
	I1018 09:46:56.372792  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:46:56.612632  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:46:56.683524  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
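
The restart path has now replayed five kubeadm init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that sequence, assuming a local kubeadm binary on PATH rather than the versioned binary under /var/lib/minikube/binaries that the log invokes over SSH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The same phase order the log shows, all against one kubeadm.yaml.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
		}
	}
}
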
	I1018 09:46:56.756728  138443 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:46:56.756836  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:57.257320  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:57.757403  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:58.257418  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:58.756989  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:59.257128  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:59.281274  138443 api_server.go:72] duration metric: took 2.524561596s to wait for apiserver process to appear ...
	I1018 09:46:59.281311  138443 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:46:59.281336  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:46:59.281974  138443 api_server.go:269] stopped: https://192.168.39.189:8443/healthz: Get "https://192.168.39.189:8443/healthz": dial tcp 192.168.39.189:8443: connect: connection refused
	I1018 09:46:59.781880  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:01.543522  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:47:01.543556  138443 api_server.go:103] status: https://192.168.39.189:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:47:01.543574  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:01.569073  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:47:01.569108  138443 api_server.go:103] status: https://192.168.39.189:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:47:01.781491  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:01.800909  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:47:01.800949  138443 api_server.go:103] status: https://192.168.39.189:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:47:02.281564  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:02.287090  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:47:02.287126  138443 api_server.go:103] status: https://192.168.39.189:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:47:02.781739  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:02.786388  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I1018 09:47:02.793835  138443 api_server.go:141] control plane version: v1.32.0
	I1018 09:47:02.793887  138443 api_server.go:131] duration metric: took 3.512568133s to wait for apiserver health ...
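
The healthz wait above tolerates the early failure modes seen in the log: a refused connection while the apiserver starts, 403s for the anonymous probe before RBAC bootstrap completes, and 500s while poststarthooks are still failing, until a 200 "ok" arrives. A minimal polling sketch; TLS verification is skipped here purely for brevity, whereas minikube's real checker trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.189:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // expect "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
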
	I1018 09:47:02.793897  138443 cni.go:84] Creating CNI manager for ""
	I1018 09:47:02.793904  138443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:47:02.795656  138443 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 09:47:02.797018  138443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1018 09:47:02.815371  138443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
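
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log, so the sketch below writes a representative bridge-plus-portmap conflist of the kind a bridge CNI setup uses; the JSON is an illustration only, not minikube's generated file:

package main

import "os"

// conflist is a representative bridge CNI chain; field values are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root, as the sudo mkdir above implies.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
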
	I1018 09:47:02.842302  138443 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:47:02.852333  138443 system_pods.go:59] 7 kube-system pods found
	I1018 09:47:02.852369  138443 system_pods.go:61] "coredns-668d6bf9bc-9bx7z" [185d6fca-ef26-409c-b0e2-bee25d2af498] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:47:02.852376  138443 system_pods.go:61] "etcd-test-preload-081901" [903ccc50-3d2c-46ef-a9ba-75f9a3d72927] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:47:02.852384  138443 system_pods.go:61] "kube-apiserver-test-preload-081901" [51baf0e4-da81-4013-a403-908223d018fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:47:02.852390  138443 system_pods.go:61] "kube-controller-manager-test-preload-081901" [059b7e3f-c4cc-4ba1-8e97-2fc5d0da98e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:47:02.852396  138443 system_pods.go:61] "kube-proxy-kmfrn" [522b61c8-23af-46a1-8545-042583e7d106] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:47:02.852401  138443 system_pods.go:61] "kube-scheduler-test-preload-081901" [292e5331-a86d-42da-9c14-be9eb966019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:47:02.852406  138443 system_pods.go:61] "storage-provisioner" [81da1e01-a762-4cd8-80b7-196d375b6208] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:47:02.852413  138443 system_pods.go:74] duration metric: took 10.086042ms to wait for pod list to return data ...
	I1018 09:47:02.852421  138443 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:47:02.863864  138443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:47:02.863896  138443 node_conditions.go:123] node cpu capacity is 2
	I1018 09:47:02.863907  138443 node_conditions.go:105] duration metric: took 11.48182ms to run NodePressure ...
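
The node_conditions check reads the node's capacity figures (here 17734596Ki of ephemeral storage and 2 CPUs). A client-go sketch of the same read; the kubeconfig path is an illustrative assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-081901", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String())
}
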
	I1018 09:47:02.863977  138443 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:47:03.198201  138443 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 09:47:03.202213  138443 kubeadm.go:743] kubelet initialised
	I1018 09:47:03.202237  138443 kubeadm.go:744] duration metric: took 4.006019ms waiting for restarted kubelet to initialise ...
	I1018 09:47:03.202252  138443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:47:03.219571  138443 ops.go:34] apiserver oom_adj: -16
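
The oom_adj probe combines pgrep with /proc to confirm the apiserver is shielded from the OOM killer (-16 here). A local sketch of the same read, assuming a kube-apiserver process is running on the host:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	// pgrep prints one PID per line; take the first.
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the log shows -16
}
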
	I1018 09:47:03.219600  138443 kubeadm.go:601] duration metric: took 8.333885158s to restartPrimaryControlPlane
	I1018 09:47:03.219609  138443 kubeadm.go:402] duration metric: took 8.384319547s to StartCluster
	I1018 09:47:03.219628  138443 settings.go:142] acquiring lock: {Name:mk3a2bfd7987fbaaa6a53ab72c677b4cd8c4a8ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:47:03.219729  138443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:47:03.220347  138443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/kubeconfig: {Name:mk43b332619cb442c058a4739a3d7e69542c9a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:47:03.220608  138443 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:47:03.220690  138443 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:47:03.220776  138443 addons.go:69] Setting storage-provisioner=true in profile "test-preload-081901"
	I1018 09:47:03.220797  138443 addons.go:238] Setting addon storage-provisioner=true in "test-preload-081901"
	W1018 09:47:03.220807  138443 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:47:03.220807  138443 config.go:182] Loaded profile config "test-preload-081901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1018 09:47:03.220817  138443 addons.go:69] Setting default-storageclass=true in profile "test-preload-081901"
	I1018 09:47:03.220842  138443 host.go:66] Checking if "test-preload-081901" exists ...
	I1018 09:47:03.220842  138443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-081901"
	I1018 09:47:03.221149  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:47:03.221187  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:47:03.221279  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:47:03.221319  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:47:03.223127  138443 out.go:179] * Verifying Kubernetes components...
	I1018 09:47:03.224505  138443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:47:03.235054  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I1018 09:47:03.235342  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I1018 09:47:03.235587  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:47:03.235906  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:47:03.236055  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:47:03.236080  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:47:03.236374  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:47:03.236394  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:47:03.236446  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:47:03.236735  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:47:03.236926  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetState
	I1018 09:47:03.236977  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:47:03.237051  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:47:03.239333  138443 kapi.go:59] client config for test-preload-081901: &rest.Config{Host:"https://192.168.39.189:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.key", CAFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:47:03.239676  138443 addons.go:238] Setting addon default-storageclass=true in "test-preload-081901"
	W1018 09:47:03.239696  138443 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:47:03.239720  138443 host.go:66] Checking if "test-preload-081901" exists ...
	I1018 09:47:03.239971  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:47:03.240011  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:47:03.251752  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41041
	I1018 09:47:03.252472  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:47:03.253163  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:47:03.253193  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:47:03.253601  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:47:03.253849  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetState
	I1018 09:47:03.254215  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1018 09:47:03.254643  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:47:03.255039  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:47:03.255061  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:47:03.255486  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:47:03.256001  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:47:03.256096  138443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:47:03.256170  138443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:47:03.260280  138443 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:47:03.261702  138443 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:47:03.261724  138443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:47:03.261748  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:47:03.265222  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:47:03.265803  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:47:03.265839  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:47:03.266169  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:47:03.266462  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:47:03.266626  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:47:03.266791  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:47:03.271750  138443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1018 09:47:03.272230  138443 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:47:03.272929  138443 main.go:141] libmachine: Using API Version  1
	I1018 09:47:03.272952  138443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:47:03.273381  138443 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:47:03.273607  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetState
	I1018 09:47:03.275767  138443 main.go:141] libmachine: (test-preload-081901) Calling .DriverName
	I1018 09:47:03.276066  138443 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:47:03.276082  138443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:47:03.276100  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHHostname
	I1018 09:47:03.279706  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:47:03.280070  138443 main.go:141] libmachine: (test-preload-081901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:01:25", ip: ""} in network mk-test-preload-081901: {Iface:virbr1 ExpiryTime:2025-10-18 10:46:44 +0000 UTC Type:0 Mac:52:54:00:fd:01:25 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:test-preload-081901 Clientid:01:52:54:00:fd:01:25}
	I1018 09:47:03.280104  138443 main.go:141] libmachine: (test-preload-081901) DBG | domain test-preload-081901 has defined IP address 192.168.39.189 and MAC address 52:54:00:fd:01:25 in network mk-test-preload-081901
	I1018 09:47:03.280286  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHPort
	I1018 09:47:03.280520  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHKeyPath
	I1018 09:47:03.280719  138443 main.go:141] libmachine: (test-preload-081901) Calling .GetSSHUsername
	I1018 09:47:03.280874  138443 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/test-preload-081901/id_rsa Username:docker}
	I1018 09:47:03.432228  138443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:47:03.450731  138443 node_ready.go:35] waiting up to 6m0s for node "test-preload-081901" to be "Ready" ...
	I1018 09:47:03.453603  138443 node_ready.go:49] node "test-preload-081901" is "Ready"
	I1018 09:47:03.453646  138443 node_ready.go:38] duration metric: took 2.861688ms for node "test-preload-081901" to be "Ready" ...
	I1018 09:47:03.453660  138443 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:47:03.453715  138443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:47:03.473250  138443 api_server.go:72] duration metric: took 252.605229ms to wait for apiserver process to appear ...
	I1018 09:47:03.473274  138443 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:47:03.473291  138443 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1018 09:47:03.477627  138443 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I1018 09:47:03.478775  138443 api_server.go:141] control plane version: v1.32.0
	I1018 09:47:03.478798  138443 api_server.go:131] duration metric: took 5.518235ms to wait for apiserver health ...
	I1018 09:47:03.478807  138443 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:47:03.481975  138443 system_pods.go:59] 7 kube-system pods found
	I1018 09:47:03.482000  138443 system_pods.go:61] "coredns-668d6bf9bc-9bx7z" [185d6fca-ef26-409c-b0e2-bee25d2af498] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:47:03.482006  138443 system_pods.go:61] "etcd-test-preload-081901" [903ccc50-3d2c-46ef-a9ba-75f9a3d72927] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:47:03.482016  138443 system_pods.go:61] "kube-apiserver-test-preload-081901" [51baf0e4-da81-4013-a403-908223d018fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:47:03.482021  138443 system_pods.go:61] "kube-controller-manager-test-preload-081901" [059b7e3f-c4cc-4ba1-8e97-2fc5d0da98e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:47:03.482026  138443 system_pods.go:61] "kube-proxy-kmfrn" [522b61c8-23af-46a1-8545-042583e7d106] Running
	I1018 09:47:03.482034  138443 system_pods.go:61] "kube-scheduler-test-preload-081901" [292e5331-a86d-42da-9c14-be9eb966019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:47:03.482042  138443 system_pods.go:61] "storage-provisioner" [81da1e01-a762-4cd8-80b7-196d375b6208] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:47:03.482050  138443 system_pods.go:74] duration metric: took 3.237622ms to wait for pod list to return data ...
	I1018 09:47:03.482063  138443 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:47:03.484687  138443 default_sa.go:45] found service account: "default"
	I1018 09:47:03.484706  138443 default_sa.go:55] duration metric: took 2.636431ms for default service account to be created ...
	I1018 09:47:03.484714  138443 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:47:03.487053  138443 system_pods.go:86] 7 kube-system pods found
	I1018 09:47:03.487080  138443 system_pods.go:89] "coredns-668d6bf9bc-9bx7z" [185d6fca-ef26-409c-b0e2-bee25d2af498] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:47:03.487088  138443 system_pods.go:89] "etcd-test-preload-081901" [903ccc50-3d2c-46ef-a9ba-75f9a3d72927] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:47:03.487096  138443 system_pods.go:89] "kube-apiserver-test-preload-081901" [51baf0e4-da81-4013-a403-908223d018fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:47:03.487103  138443 system_pods.go:89] "kube-controller-manager-test-preload-081901" [059b7e3f-c4cc-4ba1-8e97-2fc5d0da98e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:47:03.487110  138443 system_pods.go:89] "kube-proxy-kmfrn" [522b61c8-23af-46a1-8545-042583e7d106] Running
	I1018 09:47:03.487120  138443 system_pods.go:89] "kube-scheduler-test-preload-081901" [292e5331-a86d-42da-9c14-be9eb966019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:47:03.487151  138443 system_pods.go:89] "storage-provisioner" [81da1e01-a762-4cd8-80b7-196d375b6208] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:47:03.487167  138443 system_pods.go:126] duration metric: took 2.445591ms to wait for k8s-apps to be running ...
	I1018 09:47:03.487177  138443 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:47:03.487225  138443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:47:03.509496  138443 system_svc.go:56] duration metric: took 22.309549ms WaitForService to wait for kubelet
	I1018 09:47:03.509524  138443 kubeadm.go:586] duration metric: took 288.88866ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:47:03.509544  138443 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:47:03.515635  138443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1018 09:47:03.515657  138443 node_conditions.go:123] node cpu capacity is 2
	I1018 09:47:03.515669  138443 node_conditions.go:105] duration metric: took 6.12086ms to run NodePressure ...
	I1018 09:47:03.515681  138443 start.go:241] waiting for startup goroutines ...
	I1018 09:47:03.520813  138443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:47:03.535506  138443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
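
Both addon manifests are applied with the cluster's own kubectl binary against the in-VM kubeconfig, as the two commands above show. A sketch of the same step assuming a local kubectl on PATH; minikube runs the versioned binary over SSH instead:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		cmd := exec.Command("kubectl", "apply", "-f", m)
		// Point kubectl at the in-VM kubeconfig, as in the log.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("apply %s: %v\n%s", m, err, out)
		}
	}
}
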
	I1018 09:47:04.313131  138443 main.go:141] libmachine: Making call to close driver server
	I1018 09:47:04.313160  138443 main.go:141] libmachine: Making call to close driver server
	I1018 09:47:04.313181  138443 main.go:141] libmachine: (test-preload-081901) Calling .Close
	I1018 09:47:04.313170  138443 main.go:141] libmachine: (test-preload-081901) Calling .Close
	I1018 09:47:04.313512  138443 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:47:04.313519  138443 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:47:04.313520  138443 main.go:141] libmachine: (test-preload-081901) DBG | Closing plugin on server side
	I1018 09:47:04.313528  138443 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:47:04.313534  138443 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:47:04.313545  138443 main.go:141] libmachine: Making call to close driver server
	I1018 09:47:04.313554  138443 main.go:141] libmachine: (test-preload-081901) Calling .Close
	I1018 09:47:04.313562  138443 main.go:141] libmachine: (test-preload-081901) DBG | Closing plugin on server side
	I1018 09:47:04.313546  138443 main.go:141] libmachine: Making call to close driver server
	I1018 09:47:04.313606  138443 main.go:141] libmachine: (test-preload-081901) Calling .Close
	I1018 09:47:04.313814  138443 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:47:04.313828  138443 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:47:04.313887  138443 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:47:04.313903  138443 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:47:04.319864  138443 main.go:141] libmachine: Making call to close driver server
	I1018 09:47:04.319885  138443 main.go:141] libmachine: (test-preload-081901) Calling .Close
	I1018 09:47:04.320170  138443 main.go:141] libmachine: Successfully made call to close driver server
	I1018 09:47:04.320188  138443 main.go:141] libmachine: (test-preload-081901) DBG | Closing plugin on server side
	I1018 09:47:04.320190  138443 main.go:141] libmachine: Making call to close connection to plugin binary
	I1018 09:47:04.322028  138443 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:47:04.323473  138443 addons.go:514] duration metric: took 1.102789804s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:47:04.323522  138443 start.go:246] waiting for cluster config update ...
	I1018 09:47:04.323544  138443 start.go:255] writing updated cluster config ...
	I1018 09:47:04.323827  138443 ssh_runner.go:195] Run: rm -f paused
	I1018 09:47:04.329229  138443 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:47:04.329746  138443 kapi.go:59] client config for test-preload-081901: &rest.Config{Host:"https://192.168.39.189:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/profiles/test-preload-081901/client.key", CAFile:"/home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:47:04.333804  138443 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-9bx7z" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:47:06.340545  138443 pod_ready.go:104] pod "coredns-668d6bf9bc-9bx7z" is not "Ready", error: <nil>
	W1018 09:47:08.840184  138443 pod_ready.go:104] pod "coredns-668d6bf9bc-9bx7z" is not "Ready", error: <nil>
	W1018 09:47:11.339674  138443 pod_ready.go:104] pod "coredns-668d6bf9bc-9bx7z" is not "Ready", error: <nil>
	W1018 09:47:13.839581  138443 pod_ready.go:104] pod "coredns-668d6bf9bc-9bx7z" is not "Ready", error: <nil>
	I1018 09:47:14.339116  138443 pod_ready.go:94] pod "coredns-668d6bf9bc-9bx7z" is "Ready"
	I1018 09:47:14.339176  138443 pod_ready.go:86] duration metric: took 10.005344475s for pod "coredns-668d6bf9bc-9bx7z" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:14.342319  138443 pod_ready.go:83] waiting for pod "etcd-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:47:16.348468  138443 pod_ready.go:104] pod "etcd-test-preload-081901" is not "Ready", error: <nil>
	I1018 09:47:16.848289  138443 pod_ready.go:94] pod "etcd-test-preload-081901" is "Ready"
	I1018 09:47:16.848327  138443 pod_ready.go:86] duration metric: took 2.505971255s for pod "etcd-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:16.851166  138443 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:16.856316  138443 pod_ready.go:94] pod "kube-apiserver-test-preload-081901" is "Ready"
	I1018 09:47:16.856350  138443 pod_ready.go:86] duration metric: took 5.153768ms for pod "kube-apiserver-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:16.858734  138443 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:16.863085  138443 pod_ready.go:94] pod "kube-controller-manager-test-preload-081901" is "Ready"
	I1018 09:47:16.863105  138443 pod_ready.go:86] duration metric: took 4.346636ms for pod "kube-controller-manager-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:16.865630  138443 pod_ready.go:83] waiting for pod "kube-proxy-kmfrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:17.138485  138443 pod_ready.go:94] pod "kube-proxy-kmfrn" is "Ready"
	I1018 09:47:17.138521  138443 pod_ready.go:86] duration metric: took 272.870948ms for pod "kube-proxy-kmfrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:17.338247  138443 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:18.937745  138443 pod_ready.go:94] pod "kube-scheduler-test-preload-081901" is "Ready"
	I1018 09:47:18.937777  138443 pod_ready.go:86] duration metric: took 1.599498629s for pod "kube-scheduler-test-preload-081901" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:47:18.937792  138443 pod_ready.go:40] duration metric: took 14.608524471s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
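
Each pod_ready.go wait above treats a pod as done once it is either Ready or gone (deleted). A reduced client-go sketch of that predicate for a single pod; the retry loop is elided, and the kubeconfig path is an illustrative assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-9bx7z", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("pod is gone - counts as done")
		return
	}
	if err != nil {
		panic(err)
	}
	// A pod is Ready when its PodReady condition is True.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("pod is not Ready yet")
}
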
	I1018 09:47:18.980533  138443 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1018 09:47:18.982403  138443 out.go:203] 
	W1018 09:47:18.983857  138443 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1018 09:47:18.985128  138443 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:47:18.986454  138443 out.go:179] * Done! kubectl is now configured to use "test-preload-081901" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.908587915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780839908568078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86e7d0fa-e047-4113-a827-aa52805c25ec name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.909234849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3cbc5cb-aa29-46c4-884d-65e13e5ab749 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.909327993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3cbc5cb-aa29-46c4-884d-65e13e5ab749 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.909533820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f92bdc808fb473b44d4ead953534f0b326c81c956b0f0a61b0ee5f2afa35ba9e,PodSandboxId:c8096836ac9ca106ec0259eb9d6080de24909fcca5d503f1cd83ffa5f7428a29,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760780825764738993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9bx7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185d6fca-ef26-409c-b0e2-bee25d2af498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07202f42ca5417662799bde83c79a074ebb99f93c46a1ce63d592748eaac7c96,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760780822866352898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81da1e01-a762-4cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f4008de893857c575249fc135fed2731b8b12e1daa3ee00b4cde56bb062594,PodSandboxId:345b28b81e1073f9f3c8ea7217426a750eeb5ef7c92ceb51eddb8816c72de7d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760780822125044740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmfrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522b61c8-23af-46a1-8545-042583e7d106,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760780822128357812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81da1e01-a762-4cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199ba545cdff15bd2e09e8ecd3dbd373a8b03d3034a703ccd095e0cfecf2b497,PodSandboxId:0747ccbb5438bb81b23ae0394483c7207c91292829b065f61f9911bc33eaf0e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760780818929601763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de3601ae716d8e30affbe5ef7734142,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809b70fb756ef9c4588f2e119af09160ed7bd901e88d0f7910c101bf614af8ea,PodSandboxId:df3d31bc74dfd6445188d98d2e943a14fe66a9ac834f207f091f747ea9bdbd43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760780818921087039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ad24ba333935368ee9f86fc7f62fcc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f629de83ac18aca20fa2547424e3fec5fc449418308df90372c95419c9e553,PodSandboxId:80c49e739f8f08049a1446ddfcd9294fcde2ed5ed5c6f3d8902dab2845b81dfe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760780818899429752,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188b56838546df0465ea350465b7997c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a6bef7be89877ee86a1d31a5e681864d04d9807bf44dfb0beb9cf72c502543,PodSandboxId:da65c0e3d4b1c5d978bec33ee911bd5dc723db15659de812d6e7cb1ab40d833f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760780818893119313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e93cadf3dd66e8acafcdda1a3cb4c82,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3cbc5cb-aa29-46c4-884d-65e13e5ab749 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.949265119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f62e24a6-be9b-400d-88ea-9b1fe414bbc6 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.949352228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f62e24a6-be9b-400d-88ea-9b1fe414bbc6 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.950360697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d321eba8-36ad-4fa9-abf0-9121a0585c7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.950868672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780839950844173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d321eba8-36ad-4fa9-abf0-9121a0585c7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.951614449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=584b9565-6575-4c3f-a7a4-2260f422f4d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.951750727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=584b9565-6575-4c3f-a7a4-2260f422f4d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.952158428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f92bdc808fb473b44d4ead953534f0b326c81c956b0f0a61b0ee5f2afa35ba9e,PodSandboxId:c8096836ac9ca106ec0259eb9d6080de24909fcca5d503f1cd83ffa5f7428a29,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760780825764738993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9bx7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185d6fca-ef26-409c-b0e2-bee25d2af498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07202f42ca5417662799bde83c79a074ebb99f93c46a1ce63d592748eaac7c96,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760780822866352898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 81da1e01-a762-4cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f4008de893857c575249fc135fed2731b8b12e1daa3ee00b4cde56bb062594,PodSandboxId:345b28b81e1073f9f3c8ea7217426a750eeb5ef7c92ceb51eddb8816c72de7d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760780822125044740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmfrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2b61c8-23af-46a1-8545-042583e7d106,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760780822128357812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81da1e01-a762-4
cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199ba545cdff15bd2e09e8ecd3dbd373a8b03d3034a703ccd095e0cfecf2b497,PodSandboxId:0747ccbb5438bb81b23ae0394483c7207c91292829b065f61f9911bc33eaf0e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760780818929601763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 9de3601ae716d8e30affbe5ef7734142,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809b70fb756ef9c4588f2e119af09160ed7bd901e88d0f7910c101bf614af8ea,PodSandboxId:df3d31bc74dfd6445188d98d2e943a14fe66a9ac834f207f091f747ea9bdbd43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760780818921087039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ad24
ba333935368ee9f86fc7f62fcc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f629de83ac18aca20fa2547424e3fec5fc449418308df90372c95419c9e553,PodSandboxId:80c49e739f8f08049a1446ddfcd9294fcde2ed5ed5c6f3d8902dab2845b81dfe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760780818899429752,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188b56838546df0465ea350465b7997c,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a6bef7be89877ee86a1d31a5e681864d04d9807bf44dfb0beb9cf72c502543,PodSandboxId:da65c0e3d4b1c5d978bec33ee911bd5dc723db15659de812d6e7cb1ab40d833f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760780818893119313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e93cadf3dd66e8acafcdda1a3cb4c82,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=584b9565-6575-4c3f-a7a4-2260f422f4d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.991092585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=732893c0-c806-43bf-a480-e6aeb6bc2724 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.991169714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=732893c0-c806-43bf-a480-e6aeb6bc2724 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.993150041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3e1ded5-5b93-4f76-bc60-a993f941543f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.993875744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780839993850841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3e1ded5-5b93-4f76-bc60-a993f941543f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.994612843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=229e12d5-6873-4d7a-b100-4f9563d4a8c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.994959513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=229e12d5-6873-4d7a-b100-4f9563d4a8c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:19 test-preload-081901 crio[830]: time="2025-10-18 09:47:19.995183519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f92bdc808fb473b44d4ead953534f0b326c81c956b0f0a61b0ee5f2afa35ba9e,PodSandboxId:c8096836ac9ca106ec0259eb9d6080de24909fcca5d503f1cd83ffa5f7428a29,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760780825764738993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9bx7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185d6fca-ef26-409c-b0e2-bee25d2af498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07202f42ca5417662799bde83c79a074ebb99f93c46a1ce63d592748eaac7c96,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760780822866352898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 81da1e01-a762-4cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f4008de893857c575249fc135fed2731b8b12e1daa3ee00b4cde56bb062594,PodSandboxId:345b28b81e1073f9f3c8ea7217426a750eeb5ef7c92ceb51eddb8816c72de7d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760780822125044740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmfrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2b61c8-23af-46a1-8545-042583e7d106,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760780822128357812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81da1e01-a762-4
cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199ba545cdff15bd2e09e8ecd3dbd373a8b03d3034a703ccd095e0cfecf2b497,PodSandboxId:0747ccbb5438bb81b23ae0394483c7207c91292829b065f61f9911bc33eaf0e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760780818929601763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 9de3601ae716d8e30affbe5ef7734142,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809b70fb756ef9c4588f2e119af09160ed7bd901e88d0f7910c101bf614af8ea,PodSandboxId:df3d31bc74dfd6445188d98d2e943a14fe66a9ac834f207f091f747ea9bdbd43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760780818921087039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ad24
ba333935368ee9f86fc7f62fcc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f629de83ac18aca20fa2547424e3fec5fc449418308df90372c95419c9e553,PodSandboxId:80c49e739f8f08049a1446ddfcd9294fcde2ed5ed5c6f3d8902dab2845b81dfe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760780818899429752,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188b56838546df0465ea350465b7997c,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a6bef7be89877ee86a1d31a5e681864d04d9807bf44dfb0beb9cf72c502543,PodSandboxId:da65c0e3d4b1c5d978bec33ee911bd5dc723db15659de812d6e7cb1ab40d833f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760780818893119313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e93cadf3dd66e8acafcdda1a3cb4c82,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=229e12d5-6873-4d7a-b100-4f9563d4a8c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.032343291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e12ffbd7-eb87-4e0a-ad4c-fe46cc8dc675 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.032505537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e12ffbd7-eb87-4e0a-ad4c-fe46cc8dc675 name=/runtime.v1.RuntimeService/Version
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.033837459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0445d054-c59d-4441-9b34-04ddaeea3980 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.034401990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780840034377009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0445d054-c59d-4441-9b34-04ddaeea3980 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.035148931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14f5d49a-df6f-43fe-9303-413e7704f0f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.035216081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14f5d49a-df6f-43fe-9303-413e7704f0f4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 18 09:47:20 test-preload-081901 crio[830]: time="2025-10-18 09:47:20.035483109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f92bdc808fb473b44d4ead953534f0b326c81c956b0f0a61b0ee5f2afa35ba9e,PodSandboxId:c8096836ac9ca106ec0259eb9d6080de24909fcca5d503f1cd83ffa5f7428a29,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760780825764738993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9bx7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185d6fca-ef26-409c-b0e2-bee25d2af498,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07202f42ca5417662799bde83c79a074ebb99f93c46a1ce63d592748eaac7c96,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760780822866352898,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 81da1e01-a762-4cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f4008de893857c575249fc135fed2731b8b12e1daa3ee00b4cde56bb062594,PodSandboxId:345b28b81e1073f9f3c8ea7217426a750eeb5ef7c92ceb51eddb8816c72de7d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760780822125044740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kmfrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2b61c8-23af-46a1-8545-042583e7d106,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7,PodSandboxId:a0e0ecadf3a6bfd8f112b3a1f3b04087e0db10c3a4fb29f242dada73d80649b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760780822128357812,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81da1e01-a762-4
cd8-80b7-196d375b6208,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199ba545cdff15bd2e09e8ecd3dbd373a8b03d3034a703ccd095e0cfecf2b497,PodSandboxId:0747ccbb5438bb81b23ae0394483c7207c91292829b065f61f9911bc33eaf0e3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760780818929601763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 9de3601ae716d8e30affbe5ef7734142,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809b70fb756ef9c4588f2e119af09160ed7bd901e88d0f7910c101bf614af8ea,PodSandboxId:df3d31bc74dfd6445188d98d2e943a14fe66a9ac834f207f091f747ea9bdbd43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760780818921087039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ad24
ba333935368ee9f86fc7f62fcc,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f629de83ac18aca20fa2547424e3fec5fc449418308df90372c95419c9e553,PodSandboxId:80c49e739f8f08049a1446ddfcd9294fcde2ed5ed5c6f3d8902dab2845b81dfe,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760780818899429752,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 188b56838546df0465ea350465b7997c,},Annotations:
map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a6bef7be89877ee86a1d31a5e681864d04d9807bf44dfb0beb9cf72c502543,PodSandboxId:da65c0e3d4b1c5d978bec33ee911bd5dc723db15659de812d6e7cb1ab40d833f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760780818893119313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-081901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e93cadf3dd66e8acafcdda1a3cb4c82,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14f5d49a-df6f-43fe-9303-413e7704f0f4 name=/runtime.v1.RuntimeService/ListContainers
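
	The crio debug entries above are CRI gRPC round-trips (Version, ImageFsInfo, ListContainers) issued while polling container state; the payload repeats because nothing changed between polls. Below is a minimal Go sketch of the same /runtime.v1.RuntimeService/ListContainers call, assuming the k8s.io/cri-api v1 client and the crio socket path advertised in the node's kubeadm annotation (unix:///var/run/crio/crio.sock); it is an illustration, not code from this test suite.

	// listcontainers.go - sketch of the ListContainers RPC seen in the crio log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI socket from the node's cri-socket annotation.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimev1.NewRuntimeServiceClient(conn)

		// An empty filter reproduces the "No filters were applied" path in the log.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %-25s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	crictl ps -a renders this same response as a table, which is essentially what the "container status" section below shows.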
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f92bdc808fb47       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Running             coredns                   1                   c8096836ac9ca       coredns-668d6bf9bc-9bx7z
	07202f42ca541       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       2                   a0e0ecadf3a6b       storage-provisioner
	ed7f9e7caee11       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Exited              storage-provisioner       1                   a0e0ecadf3a6b       storage-provisioner
	14f4008de8938       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   345b28b81e107       kube-proxy-kmfrn
	199ba545cdff1       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   0747ccbb5438b       kube-controller-manager-test-preload-081901
	809b70fb756ef       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   df3d31bc74dfd       kube-apiserver-test-preload-081901
	24f629de83ac1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   80c49e739f8f0       etcd-test-preload-081901
	f5a6bef7be898       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   da65c0e3d4b1c       kube-scheduler-test-preload-081901
	
	
	==> coredns [f92bdc808fb473b44d4ead953534f0b326c81c956b0f0a61b0ee5f2afa35ba9e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48462 - 4421 "HINFO IN 3271328231339760666.4093577282986830024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.076007353s
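
	The single NXDOMAIN line above is CoreDNS's own HINFO self-check at startup and is expected. As a hedged sketch, the following resolves a service name through the cluster DNS directly; 10.96.0.10 is the conventional kube-dns ClusterIP and is an assumption here, since the Service IP does not appear in these logs.

	// dnscheck.go - query CoreDNS directly, the path the HINFO self-check exercises.
	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Assumed kube-dns ClusterIP; not taken from this report.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ips, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			log.Fatalf("lookup: %v", err)
		}
		fmt.Println(ips)
	}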
	
	
	==> describe nodes <==
	Name:               test-preload-081901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-081901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=test-preload-081901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_46_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-081901
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:47:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:47:03 +0000   Sat, 18 Oct 2025 09:45:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:47:03 +0000   Sat, 18 Oct 2025 09:45:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:47:03 +0000   Sat, 18 Oct 2025 09:45:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:47:03 +0000   Sat, 18 Oct 2025 09:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    test-preload-081901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f3b7452dca9471a8842c5b690e04765
	  System UUID:                0f3b7452-dca9-471a-8842-c5b690e04765
	  Boot ID:                    b1eb45f0-a832-4962-a7ae-adb955ee4149
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9bx7z                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     76s
	  kube-system                 etcd-test-preload-081901                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         80s
	  kube-system                 kube-apiserver-test-preload-081901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-test-preload-081901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-kmfrn                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-test-preload-081901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node test-preload-081901 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node test-preload-081901 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node test-preload-081901 status is now: NodeHasSufficientPID
	  Normal   NodeReady                80s                kubelet          Node test-preload-081901 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           77s                node-controller  Node test-preload-081901 event: Registered Node test-preload-081901 in Controller
	  Normal   Starting                 24s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node test-preload-081901 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node test-preload-081901 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node test-preload-081901 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 19s                kubelet          Node test-preload-081901 has been rebooted, boot id: b1eb45f0-a832-4962-a7ae-adb955ee4149
	  Normal   RegisteredNode           16s                node-controller  Node test-preload-081901 event: Registered Node test-preload-081901 in Controller
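
	The describe output above is the same data a programmatic readiness check consumes; note the Rebooted warning followed by a fresh RegisteredNode event, matching the restart under test. A minimal client-go sketch that reads this node's Ready condition, assuming a kubeconfig that can reach the test-preload-081901 cluster:

	// nodeready.go - read the Ready condition shown in the describe output above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"test-preload-081901", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
			}
		}
	}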
	
	
	==> dmesg <==
	[Oct18 09:46] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002854] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.025286] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084620] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.095797] kauditd_printk_skb: 102 callbacks suppressed
	[Oct18 09:47] kauditd_printk_skb: 177 callbacks suppressed
	[  +8.578735] kauditd_printk_skb: 212 callbacks suppressed
	
	
	==> etcd [24f629de83ac18aca20fa2547424e3fec5fc449418308df90372c95419c9e553] <==
	{"level":"info","ts":"2025-10-18T09:46:59.314708Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:46:59.320152Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:46:59.322760Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:46:59.322773Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:46:59.327495Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:46:59.327719Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2025-10-18T09:46:59.330830Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2025-10-18T09:46:59.331076Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"6fb28b9aae66857a","initial-advertise-peer-urls":["https://192.168.39.189:2380"],"listen-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:46:59.331126Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:47:00.379844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:47:00.379895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:47:00.379927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgPreVoteResp from 6fb28b9aae66857a at term 2"}
	{"level":"info","ts":"2025-10-18T09:47:00.379940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:47:00.379945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgVoteResp from 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2025-10-18T09:47:00.379953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:47:00.379960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fb28b9aae66857a elected leader 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2025-10-18T09:47:00.382552Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"6fb28b9aae66857a","local-member-attributes":"{Name:test-preload-081901 ClientURLs:[https://192.168.39.189:2379]}","request-path":"/0/members/6fb28b9aae66857a/attributes","cluster-id":"f0bdb053fd9e03ec","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:47:00.382565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:47:00.383492Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T09:47:00.383700Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:47:00.384206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	{"level":"info","ts":"2025-10-18T09:47:00.384880Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T09:47:00.385423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:47:00.385761Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:47:00.385802Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:47:20 up 0 min,  0 users,  load average: 0.82, 0.23, 0.08
	Linux test-preload-081901 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [809b70fb756ef9c4588f2e119af09160ed7bd901e88d0f7910c101bf614af8ea] <==
	I1018 09:47:01.593340       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:47:01.593369       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:47:01.593742       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1018 09:47:01.594745       1 shared_informer.go:320] Caches are synced for configmaps
	I1018 09:47:01.594802       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:47:01.594867       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:47:01.595227       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1018 09:47:01.595268       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:47:01.595275       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:47:01.595279       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:47:01.595283       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:47:01.600596       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:47:01.624775       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1018 09:47:01.642333       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1018 09:47:01.642384       1 policy_source.go:240] refreshing policies
	I1018 09:47:01.654093       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:47:01.761746       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1018 09:47:02.500104       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:47:03.047909       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1018 09:47:03.105566       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1018 09:47:03.135545       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:47:03.142593       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:47:04.800831       1 controller.go:615] quota admission added evaluator for: endpoints
	I1018 09:47:05.100133       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1018 09:47:05.200622       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [199ba545cdff15bd2e09e8ecd3dbd373a8b03d3034a703ccd095e0cfecf2b497] <==
	I1018 09:47:04.801006       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1018 09:47:04.801290       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1018 09:47:04.803208       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 09:47:04.803264       1 shared_informer.go:320] Caches are synced for node
	I1018 09:47:04.803951       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:47:04.803978       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:47:04.803984       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1018 09:47:04.803990       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1018 09:47:04.804075       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-081901"
	I1018 09:47:04.808730       1 shared_informer.go:320] Caches are synced for service account
	I1018 09:47:04.808933       1 shared_informer.go:320] Caches are synced for taint
	I1018 09:47:04.809250       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:47:04.809401       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-081901"
	I1018 09:47:04.809469       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:47:04.815089       1 shared_informer.go:320] Caches are synced for attach detach
	I1018 09:47:04.819276       1 shared_informer.go:320] Caches are synced for resource quota
	I1018 09:47:04.839814       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 09:47:04.845743       1 shared_informer.go:320] Caches are synced for garbage collector
	I1018 09:47:04.845853       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:47:04.845863       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:47:05.106955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="308.816933ms"
	I1018 09:47:05.107104       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="42.939µs"
	I1018 09:47:05.878651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.517µs"
	I1018 09:47:14.175886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.439051ms"
	I1018 09:47:14.176249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="191.32µs"
	
	
	==> kube-proxy [14f4008de893857c575249fc135fed2731b8b12e1daa3ee00b4cde56bb062594] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1018 09:47:02.329072       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1018 09:47:02.338445       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	E1018 09:47:02.338532       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:47:02.371733       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1018 09:47:02.371836       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1018 09:47:02.371861       1 server_linux.go:170] "Using iptables Proxier"
	I1018 09:47:02.374499       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:47:02.375337       1 server.go:497] "Version info" version="v1.32.0"
	I1018 09:47:02.375370       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:47:02.378130       1 config.go:199] "Starting service config controller"
	I1018 09:47:02.378177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1018 09:47:02.378223       1 config.go:105] "Starting endpoint slice config controller"
	I1018 09:47:02.378243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1018 09:47:02.380119       1 config.go:329] "Starting node config controller"
	I1018 09:47:02.380142       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1018 09:47:02.478788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1018 09:47:02.478826       1 shared_informer.go:320] Caches are synced for service config
	I1018 09:47:02.480390       1 shared_informer.go:320] Caches are synced for node config
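The "Error cleaning up nftables rules ... Operation not supported" entries above indicate that the guest kernel lacks nftables support, so kube-proxy falls back to the iptables proxier (see the "Using iptables Proxier" line). A minimal sketch of reproducing the same probe by hand from inside the VM, assuming the nft binary is available in the guest image:

	# Feed kube-proxy's cleanup command to nft the same way it does (via
	# /dev/stdin); on a kernel without nftables this fails with
	# "Operation not supported", matching the errors above.
	echo 'add table ip kube-proxy' | sudo nft -f /dev/stdin
	# If the command succeeds instead, drop the test table again:
	sudo nft delete table ip kube-proxy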
	
	
	==> kube-scheduler [f5a6bef7be89877ee86a1d31a5e681864d04d9807bf44dfb0beb9cf72c502543] <==
	I1018 09:46:59.689393       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:47:01.548120       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:47:01.549270       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:47:01.549563       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:47:01.549602       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:47:01.573877       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1018 09:47:01.573932       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:47:01.576232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:47:01.576279       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 09:47:01.576879       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1018 09:47:01.576936       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:47:01.676489       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
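The requestheader_controller warning above includes Kubernetes' own suggested remedy. A sketch of that command with hypothetical values filled in (the rolebinding name and subject below are illustrative placeholders, not values taken from this run); in this run the warning was transient, since the client-ca informer synced a moment later:

	# Grant read access to the extension-apiserver-authentication configmap.
	# "scheduler-authn-reader" and the service account are placeholders.
	kubectl create rolebinding scheduler-authn-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler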
	
	
	==> kubelet <==
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.751191    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/522b61c8-23af-46a1-8545-042583e7d106-xtables-lock\") pod \"kube-proxy-kmfrn\" (UID: \"522b61c8-23af-46a1-8545-042583e7d106\") " pod="kube-system/kube-proxy-kmfrn"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.751209    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/81da1e01-a762-4cd8-80b7-196d375b6208-tmp\") pod \"storage-provisioner\" (UID: \"81da1e01-a762-4cd8-80b7-196d375b6208\") " pod="kube-system/storage-provisioner"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: E1018 09:47:01.751282    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: E1018 09:47:01.751343    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume podName:185d6fca-ef26-409c-b0e2-bee25d2af498 nodeName:}" failed. No retries permitted until 2025-10-18 09:47:02.251322499 +0000 UTC m=+5.664625873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume") pod "coredns-668d6bf9bc-9bx7z" (UID: "185d6fca-ef26-409c-b0e2-bee25d2af498") : object "kube-system"/"coredns" not registered
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.753005    1153 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.753121    1153 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.753144    1153 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.754361    1153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.755848    1153 setters.go:602] "Node became not ready" node="test-preload-081901" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-18T09:47:01Z","lastTransitionTime":"2025-10-18T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: E1018 09:47:01.757851    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-081901\" already exists" pod="kube-system/kube-apiserver-test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.757947    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: E1018 09:47:01.791712    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-081901\" already exists" pod="kube-system/kube-controller-manager-test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: I1018 09:47:01.791751    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-081901"
	Oct 18 09:47:01 test-preload-081901 kubelet[1153]: E1018 09:47:01.805883    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-081901\" already exists" pod="kube-system/kube-scheduler-test-preload-081901"
	Oct 18 09:47:02 test-preload-081901 kubelet[1153]: E1018 09:47:02.253649    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:47:02 test-preload-081901 kubelet[1153]: E1018 09:47:02.253747    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume podName:185d6fca-ef26-409c-b0e2-bee25d2af498 nodeName:}" failed. No retries permitted until 2025-10-18 09:47:03.2537339 +0000 UTC m=+6.667037276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume") pod "coredns-668d6bf9bc-9bx7z" (UID: "185d6fca-ef26-409c-b0e2-bee25d2af498") : object "kube-system"/"coredns" not registered
	Oct 18 09:47:02 test-preload-081901 kubelet[1153]: I1018 09:47:02.838257    1153 scope.go:117] "RemoveContainer" containerID="ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7"
	Oct 18 09:47:03 test-preload-081901 kubelet[1153]: I1018 09:47:03.050648    1153 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 18 09:47:03 test-preload-081901 kubelet[1153]: E1018 09:47:03.265927    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 18 09:47:03 test-preload-081901 kubelet[1153]: E1018 09:47:03.266008    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume podName:185d6fca-ef26-409c-b0e2-bee25d2af498 nodeName:}" failed. No retries permitted until 2025-10-18 09:47:05.265991746 +0000 UTC m=+8.679295133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/185d6fca-ef26-409c-b0e2-bee25d2af498-config-volume") pod "coredns-668d6bf9bc-9bx7z" (UID: "185d6fca-ef26-409c-b0e2-bee25d2af498") : object "kube-system"/"coredns" not registered
	Oct 18 09:47:06 test-preload-081901 kubelet[1153]: E1018 09:47:06.780166    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780826779459026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:47:06 test-preload-081901 kubelet[1153]: E1018 09:47:06.780486    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780826779459026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:47:14 test-preload-081901 kubelet[1153]: I1018 09:47:14.137360    1153 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:47:16 test-preload-081901 kubelet[1153]: E1018 09:47:16.783199    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780836782365082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 18 09:47:16 test-preload-081901 kubelet[1153]: E1018 09:47:16.783235    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760780836782365082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
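The repeating eviction_manager errors above show the kubelet discarding the ImageFsInfoResponse that CRI-O returns for /var/lib/containers/storage/overlay-images, so eviction synchronization keeps failing. One way to inspect the same CRI call directly on the node, assuming crictl is installed in the guest:

	# Ask CRI-O for its image-filesystem stats over the CRI API; this is the
	# response the kubelet's eviction manager is rejecting above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo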
	
	
	==> storage-provisioner [07202f42ca5417662799bde83c79a074ebb99f93c46a1ce63d592748eaac7c96] <==
	I1018 09:47:03.039413       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:47:03.062828       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:47:03.063385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:47:20.473852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:47:20.473996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-081901_db222245-5479-4c41-bcbe-b5843d03d729!
	I1018 09:47:20.474973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"603f39ce-94eb-43ad-957d-e383cfbe31c9", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-081901_db222245-5479-4c41-bcbe-b5843d03d729 became leader
	I1018 09:47:20.574558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-081901_db222245-5479-4c41-bcbe-b5843d03d729!
	
	
	==> storage-provisioner [ed7f9e7caee1109759219cd2a5b55af2b37f24679cc056fdf724adb83a877aa7] <==
	I1018 09:47:02.229909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:47:02.235645       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
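This first storage-provisioner container exits fatally because it starts before the apiserver is reachable at the service VIP 10.96.0.1; the kubelet's "RemoveContainer" entry above references the same container ID, and the replacement instance shown in the previous section then acquires the leader lease. A quick way to probe the endpoint the provisioner queries, assuming kubectl access to the profile (this goes through the kubeconfig server address rather than the in-cluster VIP):

	# /version is the same endpoint the provisioner's client hits at startup.
	kubectl --context test-preload-081901 get --raw /version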
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-081901 -n test-preload-081901
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-081901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-081901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-081901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-081901: (1.001412408s)
--- FAIL: TestPreload (131.49s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (84.86s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-551330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:54:17.228302  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-551330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.401986267s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-551330] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-551330" primary control-plane node in "pause-551330" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-551330" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
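The stdout above is what pause_test.go:100 inspects, and it does not contain the expected "The running cluster does not require reconfiguration" marker; the stderr trace of the same start follows below. A sketch of running the equivalent check by hand, assuming the pause-551330 profile still exists so a further start exercises the reconfiguration path:

	# Re-run the start against the existing profile and look for the marker
	# line the test expects.
	out/minikube-linux-amd64 start -p pause-551330 --driver=kvm2 \
	  --container-runtime=crio 2>&1 | \
	  grep -F "The running cluster does not require reconfiguration"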
** stderr ** 
	I1018 09:54:14.114506  147357 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:54:14.115108  147357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:14.115125  147357 out.go:374] Setting ErrFile to fd 2...
	I1018 09:54:14.115149  147357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:14.115783  147357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:54:14.116393  147357 out.go:368] Setting JSON to false
	I1018 09:54:14.117411  147357 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5794,"bootTime":1760775460,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:54:14.117471  147357 start.go:141] virtualization: kvm guest
	I1018 09:54:14.119532  147357 out.go:179] * [pause-551330] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:54:14.120991  147357 notify.go:220] Checking for updates...
	I1018 09:54:14.121006  147357 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:54:14.124455  147357 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:54:14.126170  147357 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:54:14.130841  147357 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:14.132528  147357 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:54:14.133984  147357 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:54:14.136026  147357 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:14.136696  147357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:14.136774  147357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:14.158923  147357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I1018 09:54:14.159491  147357 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:14.160023  147357 main.go:141] libmachine: Using API Version  1
	I1018 09:54:14.160059  147357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:14.160525  147357 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:14.160768  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:14.161072  147357 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:54:14.161454  147357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:14.161504  147357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:14.181408  147357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I1018 09:54:14.182114  147357 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:14.182743  147357 main.go:141] libmachine: Using API Version  1
	I1018 09:54:14.182775  147357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:14.183259  147357 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:14.183458  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:14.220221  147357 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 09:54:14.221755  147357 start.go:305] selected driver: kvm2
	I1018 09:54:14.221782  147357 start.go:925] validating driver "kvm2" against &{Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:14.222001  147357 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:54:14.222535  147357 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:14.222676  147357 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:14.240612  147357 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:14.240686  147357 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:14.255556  147357 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:14.256344  147357 cni.go:84] Creating CNI manager for ""
	I1018 09:54:14.256400  147357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:14.256469  147357 start.go:349] cluster config:
	{Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:14.256599  147357 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:14.258317  147357 out.go:179] * Starting "pause-551330" primary control-plane node in "pause-551330" cluster
	I1018 09:54:14.259394  147357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:14.259449  147357 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:54:14.259460  147357 cache.go:58] Caching tarball of preloaded images
	I1018 09:54:14.259598  147357 preload.go:233] Found /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:54:14.259609  147357 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:54:14.259722  147357 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/config.json ...
	I1018 09:54:14.259995  147357 start.go:360] acquireMachinesLock for pause-551330: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:54:38.962332  147357 start.go:364] duration metric: took 24.702301489s to acquireMachinesLock for "pause-551330"
	I1018 09:54:38.962390  147357 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:54:38.962398  147357 fix.go:54] fixHost starting: 
	I1018 09:54:38.962817  147357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:38.962855  147357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:38.979503  147357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I1018 09:54:38.979956  147357 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:38.980456  147357 main.go:141] libmachine: Using API Version  1
	I1018 09:54:38.980481  147357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:38.980936  147357 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:38.981194  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.981378  147357 main.go:141] libmachine: (pause-551330) Calling .GetState
	I1018 09:54:38.982977  147357 fix.go:112] recreateIfNeeded on pause-551330: state=Running err=<nil>
	W1018 09:54:38.983007  147357 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:54:38.985252  147357 out.go:252] * Updating the running kvm2 "pause-551330" VM ...
	I1018 09:54:38.985290  147357 machine.go:93] provisionDockerMachine start ...
	I1018 09:54:38.985309  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.985539  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:38.988542  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989090  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:38.989123  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989325  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:38.989635  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.989850  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.990035  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:38.990231  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.990553  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:38.990567  147357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:54:39.097823  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.097854  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098174  147357 buildroot.go:166] provisioning hostname "pause-551330"
	I1018 09:54:39.098212  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098449  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.101976  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102371  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.102406  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102652  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.102836  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103019  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103152  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.103309  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.103531  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.103542  147357 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-551330 && echo "pause-551330" | sudo tee /etc/hostname
	I1018 09:54:39.229908  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.229947  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.233649  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234005  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.234039  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234273  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.234500  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234680  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234817  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.234984  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.235237  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.235255  147357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-551330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-551330/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-551330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:54:39.347064  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:39.347103  147357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:54:39.347187  147357 buildroot.go:174] setting up certificates
	I1018 09:54:39.347206  147357 provision.go:84] configureAuth start
	I1018 09:54:39.347227  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.347563  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:39.351095  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351587  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.351618  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351960  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.355289  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.355813  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.355848  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.356065  147357 provision.go:143] copyHostCerts
	I1018 09:54:39.356129  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:54:39.356164  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:54:39.356239  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:54:39.356342  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:54:39.356350  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:54:39.356373  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:54:39.356429  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:54:39.356436  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:54:39.356455  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:54:39.356510  147357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.pause-551330 san=[127.0.0.1 192.168.72.173 localhost minikube pause-551330]
	I1018 09:54:39.700579  147357 provision.go:177] copyRemoteCerts
	I1018 09:54:39.700702  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:54:39.700736  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.703988  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704373  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.704403  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704662  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.704897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.705078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.705246  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:39.796151  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:54:39.835425  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:54:39.879533  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:54:39.916276  147357 provision.go:87] duration metric: took 569.05192ms to configureAuth
	I1018 09:54:39.916316  147357 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:54:39.916597  147357 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:39.916720  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.920699  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921180  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.921212  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921477  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.921772  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.921975  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.922130  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.922335  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.922588  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.922609  147357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:54:45.547728  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:54:45.547759  147357 machine.go:96] duration metric: took 6.562461144s to provisionDockerMachine
	I1018 09:54:45.547771  147357 start.go:293] postStartSetup for "pause-551330" (driver="kvm2")
	I1018 09:54:45.547782  147357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:54:45.547799  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.548276  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:54:45.548309  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.552062  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552547  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.552577  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552855  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.553105  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.553313  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.553552  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.639914  147357 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:54:45.645353  147357 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:54:45.645387  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:54:45.645473  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:54:45.645604  147357 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:54:45.645758  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:54:45.659585  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:45.694841  147357 start.go:296] duration metric: took 147.054302ms for postStartSetup
	I1018 09:54:45.694886  147357 fix.go:56] duration metric: took 6.732489537s for fixHost
	I1018 09:54:45.694915  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.698341  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.698803  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.698837  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.699078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.699338  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699528  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699695  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.699923  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:45.700232  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:45.700250  147357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:54:45.810095  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760781285.807505553
	
	I1018 09:54:45.810128  147357 fix.go:216] guest clock: 1760781285.807505553
	I1018 09:54:45.810152  147357 fix.go:229] Guest: 2025-10-18 09:54:45.807505553 +0000 UTC Remote: 2025-10-18 09:54:45.694891594 +0000 UTC m=+31.626040864 (delta=112.613959ms)
	I1018 09:54:45.810186  147357 fix.go:200] guest clock delta is within tolerance: 112.613959ms
	I1018 09:54:45.810194  147357 start.go:83] releasing machines lock for "pause-551330", held for 6.847826758s
	I1018 09:54:45.810229  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.810587  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:45.814246  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.814743  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.814775  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.815056  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815773  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815980  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.816084  147357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:54:45.816160  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.816286  147357 ssh_runner.go:195] Run: cat /version.json
	I1018 09:54:45.816327  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.819953  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820134  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820449  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820481  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820622  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820663  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820699  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820991  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821109  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821201  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821320  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821393  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.821479  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.898763  147357 ssh_runner.go:195] Run: systemctl --version
	I1018 09:54:45.938487  147357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:54:46.095823  147357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:54:46.106551  147357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:54:46.106642  147357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:54:46.124438  147357 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:54:46.124465  147357 start.go:495] detecting cgroup driver to use...
	I1018 09:54:46.124540  147357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:54:46.149929  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:54:46.170694  147357 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:54:46.170787  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:54:46.198018  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:54:46.223925  147357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:54:46.434671  147357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:54:46.628353  147357 docker.go:234] disabling docker service ...
	I1018 09:54:46.628436  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:54:46.659616  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:54:46.678749  147357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:54:46.883707  147357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:54:47.065520  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:54:47.083763  147357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:54:47.110596  147357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:54:47.110666  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.123888  147357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:54:47.123960  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.141027  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.153739  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.167386  147357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:54:47.181822  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.195818  147357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.213241  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.232199  147357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:54:47.246299  147357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:54:47.263519  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:47.457540  147357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:54:54.210056  147357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.752452014s)
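All of the CRI-O settings above (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) are edited in place in /etc/crio/crio.conf.d/02-crio.conf with sed, and none of them take effect until the daemon restarts, which is why the restart accounts for nearly all of the elapsed time here (6.75s). Reduced to its two essential edits plus the restart, the reconfiguration is roughly:

	# Point CRI-O at the pause image and cgroup driver minikube expects, then restart.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	# The CRI socket should answer once crio is back up.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version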
	I1018 09:54:54.210106  147357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:54:54.210198  147357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:54:54.215857  147357 start.go:563] Will wait 60s for crictl version
	I1018 09:54:54.215926  147357 ssh_runner.go:195] Run: which crictl
	I1018 09:54:54.219954  147357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:54:54.267482  147357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:54:54.267577  147357 ssh_runner.go:195] Run: crio --version
	I1018 09:54:54.301699  147357 ssh_runner.go:195] Run: crio --version
	I1018 09:54:54.335217  147357 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 09:54:54.336616  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:54.340024  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:54.340488  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:54.340516  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
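The DBG lines show how the kvm2 driver learns the VM's address: it matches the domain's MAC against the DHCP leases libvirt hands out on the mk-pause-551330 network. The same lookup can be reproduced with virsh (connection URI as in the cluster config above):

	# Show libvirt's DHCP leases and pick out the pause-551330 VM by its MAC.
	virsh --connect qemu:///system net-dhcp-leases mk-pause-551330 | grep 52:54:00:c8:e6:0b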
	I1018 09:54:54.340841  147357 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1018 09:54:54.346478  147357 kubeadm.go:883] updating cluster {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:54:54.346648  147357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:54.346700  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.393189  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.393219  147357 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:54:54.393288  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.429351  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.429382  147357 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:54:54.429393  147357 kubeadm.go:934] updating node { 192.168.72.173 8443 v1.34.1 crio true true} ...
	I1018 09:54:54.429532  147357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-551330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
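The empty ExecStart= line followed by a second, fully populated ExecStart in the unit dump above is the standard systemd drop-in idiom: the blank assignment clears the ExecStart inherited from kubelet.service, so the drop-in's command replaces it instead of being appended. Once minikube has written the drop-in (the 10-kubeadm.conf scp'd a few lines below), the effective unit can be checked on the node with:

	# Confirm the drop-in override is in effect for the kubelet unit.
	systemctl cat kubelet                 # prints kubelet.service plus 10-kubeadm.conf
	systemctl show -p ExecStart kubelet   # shows the single, overridden command line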
	I1018 09:54:54.429623  147357 ssh_runner.go:195] Run: crio config
	I1018 09:54:54.481697  147357 cni.go:84] Creating CNI manager for ""
	I1018 09:54:54.481725  147357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:54.481771  147357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:54:54.481808  147357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.173 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-551330 NodeName:pause-551330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:54:54.481985  147357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-551330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.173"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.173"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
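This rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases (v1.26 and later) can statically check such a file before it is ever applied; assuming the binary minikube staged, that would look like:

	# Static validation only - touches no cluster state.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new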
	I1018 09:54:54.482057  147357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:54:54.495054  147357 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:54:54.495156  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:54:54.507323  147357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 09:54:54.532818  147357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:54:54.554767  147357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 09:54:54.577108  147357 ssh_runner.go:195] Run: grep 192.168.72.173	control-plane.minikube.internal$ /etc/hosts
	I1018 09:54:54.581771  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:54.748906  147357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:54:54.765440  147357 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330 for IP: 192.168.72.173
	I1018 09:54:54.765464  147357 certs.go:195] generating shared ca certs ...
	I1018 09:54:54.765481  147357 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:54.765688  147357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:54:54.765743  147357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:54:54.765758  147357 certs.go:257] generating profile certs ...
	I1018 09:54:54.765873  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/client.key
	I1018 09:54:54.765955  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key.f7abae6f
	I1018 09:54:54.766011  147357 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key
	I1018 09:54:54.766179  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:54:54.766220  147357 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:54:54.766234  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:54:54.766266  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:54:54.766297  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:54:54.766330  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:54:54.766394  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:54.766996  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:54:54.799419  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:54:54.836447  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:54:54.876190  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:54:54.908602  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:54:54.946763  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:54:55.099316  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:54:55.164040  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:54:55.252436  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:54:55.339043  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:54:55.415069  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:54:55.491732  147357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:54:55.546576  147357 ssh_runner.go:195] Run: openssl version
	I1018 09:54:55.562316  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:54:55.591880  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601866  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601964  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.616288  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:54:55.647017  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:54:55.678662  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691170  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691247  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.713975  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:54:55.742740  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:54:55.778834  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795270  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795346  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.816687  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
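Each openssl x509 -hash call above computes the subject-name hash that OpenSSL uses to locate CAs in /etc/ssl/certs: the trust directory is searched via symlinks named <hash>.0, which is exactly what the ln -fs commands create (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the test certs). One such link, done by hand:

	# Install a CA into OpenSSL's hash-lookup directory.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"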
	I1018 09:54:55.852282  147357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:54:55.864301  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:54:55.886636  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:54:55.909452  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:54:55.926278  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:54:55.941213  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:54:55.955890  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
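The -checkend 86400 flag asks whether a certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and 1 otherwise, so only certs that are about to lapse get regenerated. In script form:

	# Exit status distinguishes healthy certs from ones expiring within a day.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver.crt valid for at least another 24h"
	else
	  echo "apiserver.crt expires within 24h - would be regenerated"
	fi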
	I1018 09:54:55.974095  147357 kubeadm.go:400] StartCluster: {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:55.974274  147357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:54:55.974352  147357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:54:56.170596  147357 cri.go:89] found id: "29cc8bdc21235a3263fd07af980bbd5afddd5e8bf838d869aee15b79d773a494"
	I1018 09:54:56.170624  147357 cri.go:89] found id: "12ba7f533d86858ba90df34ecdc2481658f40f2fee74ee73c1d4d71422d3ac90"
	I1018 09:54:56.170630  147357 cri.go:89] found id: "9a47998f97871a1bdc1689b83a0f8637d3e8446f5280c36026c063fef6da5dee"
	I1018 09:54:56.170635  147357 cri.go:89] found id: "35e6ebdf38ddd767dbcb32100e38d541fabd6aa49dbcfe4f5c4ec0126f62afd6"
	I1018 09:54:56.170639  147357 cri.go:89] found id: "6cd73c1cfa681b6f01554bc334d6d83ec0b898a4c61889e41fc36e0da6cc8160"
	I1018 09:54:56.170644  147357 cri.go:89] found id: "cf297adff2cd81079a444636d2d0d432f18a698dd99539c0fcaf3442d5dd19d1"
	I1018 09:54:56.170648  147357 cri.go:89] found id: "95dca9a9c58403a13f82a1493979bb1137030c24168e0d5e658e0c4013ac19bc"
	I1018 09:54:56.170652  147357 cri.go:89] found id: "8e2b055b2814c8c9d86ead76882979ac75549da5e8b5ff1fdcfd1559f3bc5d6b"
	I1018 09:54:56.170655  147357 cri.go:89] found id: "a85801441afa7aeb2a2d98a543437e2586b071068cb98586798b3c805b2cd4ae"
	I1018 09:54:56.170664  147357 cri.go:89] found id: "9249eb8ae6f593eba3ce4059af8cd0db63cc9bb6627365a4204933eff5a4ea62"
	I1018 09:54:56.170669  147357 cri.go:89] found id: ""
	I1018 09:54:56.170731  147357 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
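The excerpt cuts off while minikube is enumerating the kube-system containers it would pause: first through the CRI (crictl, filtered by the io.kubernetes.pod.namespace label), then cross-checked against the low-level OCI runtime with runc. The two listings it reconciles can be run by hand on the node:

	# CRI-level view: container IDs in the kube-system namespace.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# OCI-level view: everything runc knows about, as JSON.
	sudo runc list -f json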
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-551330 -n pause-551330
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-551330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-551330 logs -n 25: (2.312407596s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-882442 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                                   │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo containerd config dump                                                                                                                                                                                                                            │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                                     │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo systemctl cat crio --no-pager                                                                                                                                                                                                                     │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                                           │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo crio config                                                                                                                                                                                                                                       │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ delete  │ -p cilium-882442                                                                                                                                                                                                                                                        │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │ 18 Oct 25 09:52 UTC │
	│ start   │ -p pause-551330 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-551330              │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │ 18 Oct 25 09:54 UTC │
	│ stop    │ stopped-upgrade-461592 stop                                                                                                                                                                                                                                             │ stopped-upgrade-461592    │ jenkins │ v1.32.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:53 UTC │
	│ start   │ -p stopped-upgrade-461592 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ delete  │ -p kubernetes-upgrade-689545                                                                                                                                                                                                                                            │ kubernetes-upgrade-689545 │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:53 UTC │
	│ start   │ -p cert-options-161184 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                     │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p cert-expiration-464564 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ cert-expiration-464564    │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-461592 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ delete  │ -p stopped-upgrade-461592                                                                                                                                                                                                                                               │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:55 UTC │
	│ start   │ -p pause-551330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-551330              │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:55 UTC │
	│ ssh     │ cert-options-161184 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ ssh     │ -p cert-options-161184 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ delete  │ -p cert-options-161184                                                                                                                                                                                                                                                  │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                       │ no-preload-231061         │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ delete  │ -p cert-expiration-464564                                                                                                                                                                                                                                               │ cert-expiration-464564    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                        │ embed-certs-512028        │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-066041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:55 UTC │ 18 Oct 25 09:55 UTC │
	│ stop    │ -p old-k8s-version-066041 --alsologtostderr -v=3                                                                                                                                                                                                                        │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:54:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:54:37.633375  147912 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:54:37.633618  147912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:37.633635  147912 out.go:374] Setting ErrFile to fd 2...
	I1018 09:54:37.633639  147912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:37.634016  147912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:54:37.634722  147912 out.go:368] Setting JSON to false
	I1018 09:54:37.635716  147912 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5818,"bootTime":1760775460,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:54:37.635848  147912 start.go:141] virtualization: kvm guest
	I1018 09:54:37.638001  147912 out.go:179] * [embed-certs-512028] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:54:37.639390  147912 notify.go:220] Checking for updates...
	I1018 09:54:37.639434  147912 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:54:37.640598  147912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:54:37.641987  147912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:54:37.643398  147912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:37.644555  147912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:54:37.645980  147912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:54:37.647952  147912 config.go:182] Loaded profile config "no-preload-231061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:37.648105  147912 config.go:182] Loaded profile config "old-k8s-version-066041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:54:37.648301  147912 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:37.648415  147912 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:54:37.689394  147912 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:54:37.690823  147912 start.go:305] selected driver: kvm2
	I1018 09:54:37.690844  147912 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:54:37.690860  147912 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:54:37.691922  147912 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:37.692033  147912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:37.711131  147912 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:37.711185  147912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:37.726548  147912 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:37.726596  147912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:54:37.726844  147912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:54:37.726877  147912 cni.go:84] Creating CNI manager for ""
	I1018 09:54:37.726923  147912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:37.726932  147912 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 09:54:37.726975  147912 start.go:349] cluster config:
	{Name:embed-certs-512028 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-512028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:37.727061  147912 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:37.728830  147912 out.go:179] * Starting "embed-certs-512028" primary control-plane node in "embed-certs-512028" cluster
	I1018 09:54:33.202315  147724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:33.202471  147724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/config.json ...
	I1018 09:54:33.202507  147724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/config.json: {Name:mk4c4ae2924179b7addfe96c094be3e7eb036dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:33.202543  147724 cache.go:107] acquiring lock: {Name:mk694e0cfe524409f6f44f58811b798691aa11aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202573  147724 cache.go:107] acquiring lock: {Name:mkc1318dfc0a8499a0316ae38be903831a1f7f57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202545  147724 cache.go:107] acquiring lock: {Name:mk41703bfc436ae2592799cfc3287c3240cc1e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202640  147724 cache.go:107] acquiring lock: {Name:mkbbb31643d4357cf85a0da65f3b1a8beafb6de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202674  147724 cache.go:115] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:54:33.202691  147724 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 162.395µs
	I1018 09:54:33.202701  147724 start.go:360] acquireMachinesLock for no-preload-231061: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:54:33.202712  147724 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:54:33.202732  147724 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:54:33.202785  147724 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:54:33.202803  147724 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:54:33.202848  147724 cache.go:107] acquiring lock: {Name:mk0d2e817585d200d58f7d2c6afffbf74d04e57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202852  147724 cache.go:107] acquiring lock: {Name:mk9b9918b731bcee06e67fee4ba588d52dbec6f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202926  147724 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:54:33.202965  147724 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:54:33.202609  147724 cache.go:107] acquiring lock: {Name:mkacf1123e0c583992211df9fbe06e6b9002c23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.203155  147724 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:54:33.203063  147724 cache.go:107] acquiring lock: {Name:mk518c4968b55574cc240941de1656772422774f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.203250  147724 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:54:33.204523  147724 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:54:33.204533  147724 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:54:33.204538  147724 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:54:33.204524  147724 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:54:33.204583  147724 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:54:33.204748  147724 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:54:33.204792  147724 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:54:33.819576  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:54:33.834526  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:54:33.843094  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:54:33.848856  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:54:33.876377  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:54:33.880925  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:54:33.895299  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:54:33.958585  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:54:33.958617  147724 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 756.01182ms
	I1018 09:54:33.958635  147724 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:54:34.178660  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:54:34.178692  147724 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 975.841633ms
	I1018 09:54:34.178710  147724 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:54:35.119820  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:54:35.119874  147724 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.916802558s
	I1018 09:54:35.119893  147724 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:54:35.211164  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:54:35.211203  147724 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.008562245s
	I1018 09:54:35.211224  147724 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:54:35.290227  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:54:35.290270  147724 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.08772638s
	I1018 09:54:35.290289  147724 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:54:35.327658  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:54:35.327694  147724 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.125174563s
	I1018 09:54:35.327708  147724 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:54:35.618217  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:54:35.618254  147724 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.415408937s
	I1018 09:54:35.618271  147724 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:54:35.618297  147724 cache.go:87] Successfully saved all images to host disk.
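With --preload=false, no bundled image tarball is used: each image is fetched individually (the daemon lookups above fail because no local Docker holds these images, so they come from the registry) and saved as its own tar under the cache, keyed by registry path and tag. The layout the log paths imply is directly inspectable (here under the job's MINIKUBE_HOME rather than the default ~/.minikube):

	# Per-image tarballs written by the no-preload cache step.
	ls /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/
	# e.g. etcd_3.6.4-0  kube-apiserver_v1.34.1  kube-proxy_v1.34.1  pause_3.10.1 ...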
	I1018 09:54:38.962332  147357 start.go:364] duration metric: took 24.702301489s to acquireMachinesLock for "pause-551330"
	I1018 09:54:38.962390  147357 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:54:38.962398  147357 fix.go:54] fixHost starting: 
	I1018 09:54:38.962817  147357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:38.962855  147357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:38.979503  147357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I1018 09:54:38.979956  147357 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:38.980456  147357 main.go:141] libmachine: Using API Version  1
	I1018 09:54:38.980481  147357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:38.980936  147357 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:38.981194  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.981378  147357 main.go:141] libmachine: (pause-551330) Calling .GetState
	I1018 09:54:38.982977  147357 fix.go:112] recreateIfNeeded on pause-551330: state=Running err=<nil>
	W1018 09:54:38.983007  147357 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:54:38.985252  147357 out.go:252] * Updating the running kvm2 "pause-551330" VM ...
	I1018 09:54:38.985290  147357 machine.go:93] provisionDockerMachine start ...
	I1018 09:54:38.985309  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.985539  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:38.988542  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989090  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:38.989123  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989325  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:38.989635  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.989850  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.990035  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:38.990231  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.990553  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:38.990567  147357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:54:39.097823  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.097854  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098174  147357 buildroot.go:166] provisioning hostname "pause-551330"
	I1018 09:54:39.098212  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098449  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.101976  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102371  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.102406  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102652  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.102836  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103019  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103152  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.103309  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.103531  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.103542  147357 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-551330 && echo "pause-551330" | sudo tee /etc/hostname
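
An aside on the "Using SSH client type: native" lines: they correspond to minikube's built-in Go SSH client, as opposed to the external /usr/bin/ssh path used elsewhere in this log. A minimal sketch of the same round trip with golang.org/x/crypto/ssh (an illustration, not minikube's actual provisioner; the address, user, and key path are taken from the log above):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and address as logged for the pause-551330 machine.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The external-client path in this log also disables host key
		// checking (StrictHostKeyChecking=no); this mirrors that.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.72.173:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname pause-551330 && echo "pause-551330" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("output: %s\n", out)
}
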
	I1018 09:54:36.964788  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.019125  147302 main.go:141] libmachine: (old-k8s-version-066041) found domain IP: 192.168.50.251
	I1018 09:54:37.019171  147302 main.go:141] libmachine: (old-k8s-version-066041) reserving static IP address...
	I1018 09:54:37.019187  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has current primary IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.019881  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-066041", mac: "52:54:00:f6:0c:31", ip: "192.168.50.251"} in network mk-old-k8s-version-066041
	I1018 09:54:37.258315  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Getting to WaitForSSH function...
	I1018 09:54:37.258358  147302 main.go:141] libmachine: (old-k8s-version-066041) reserved static IP address 192.168.50.251 for domain old-k8s-version-066041
	I1018 09:54:37.258393  147302 main.go:141] libmachine: (old-k8s-version-066041) waiting for SSH...
	I1018 09:54:37.261696  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.262220  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.262297  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.262475  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Using SSH client type: external
	I1018 09:54:37.262511  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Using SSH private key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa (-rw-------)
	I1018 09:54:37.262563  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:54:37.262590  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | About to run SSH command:
	I1018 09:54:37.262610  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | exit 0
	I1018 09:54:37.399274  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | SSH cmd err, output: <nil>: 
	I1018 09:54:37.399653  147302 main.go:141] libmachine: (old-k8s-version-066041) domain creation complete
	I1018 09:54:37.399997  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetConfigRaw
	I1018 09:54:37.400737  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:37.400969  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:37.401155  147302 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 09:54:37.401179  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetState
	I1018 09:54:37.402879  147302 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 09:54:37.402893  147302 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 09:54:37.402899  147302 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 09:54:37.402906  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.406094  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.406529  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.406557  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.406736  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.406953  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.407133  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.407325  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.407499  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.407769  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.407781  147302 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 09:54:37.526535  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:37.526565  147302 main.go:141] libmachine: Detecting the provisioner...
	I1018 09:54:37.526575  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.530330  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.530741  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.530775  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.531026  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.531252  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.531449  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.531617  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.531787  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.532030  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.532044  147302 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 09:54:37.650975  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 09:54:37.651061  147302 main.go:141] libmachine: found compatible host: buildroot
	I1018 09:54:37.651074  147302 main.go:141] libmachine: Provisioning with buildroot...
	I1018 09:54:37.651084  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.651387  147302 buildroot.go:166] provisioning hostname "old-k8s-version-066041"
	I1018 09:54:37.651418  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.651639  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.655016  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.655484  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.655515  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.655779  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.655984  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.656192  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.656366  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.656547  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.656851  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.656872  147302 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-066041 && echo "old-k8s-version-066041" | sudo tee /etc/hostname
	I1018 09:54:37.797995  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066041
	
	I1018 09:54:37.798024  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.801544  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.801971  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.802001  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.802237  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.802466  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.802653  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.802811  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.803008  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.803252  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.803270  147302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-066041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-066041/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-066041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:54:37.930333  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
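
The shell block that produced the empty output above is an idempotent /etc/hosts edit: do nothing if a line already ends with the machine name, rewrite an existing 127.0.1.1 entry if one exists, otherwise append a new one. A rough pure-Go equivalent (a sketch under simplifying assumptions, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the three branches of the shell snippet:
// no-op if the name is present, rewrite an existing 127.0.1.1 line,
// otherwise append one.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return nil // matches the grep -xq '.*\sNAME' guard
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // the sed rewrite branch
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+name) // the tee -a branch
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-066041"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
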
	I1018 09:54:37.930376  147302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:54:37.930408  147302 buildroot.go:174] setting up certificates
	I1018 09:54:37.930423  147302 provision.go:84] configureAuth start
	I1018 09:54:37.930442  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.930795  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:37.934413  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.934897  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.934925  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.935161  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.937762  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.938183  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.938226  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.938420  147302 provision.go:143] copyHostCerts
	I1018 09:54:37.938483  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:54:37.938500  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:54:37.938574  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:54:37.938708  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:54:37.938719  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:54:37.938750  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:54:37.938808  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:54:37.938818  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:54:37.938854  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:54:37.938965  147302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-066041 san=[127.0.0.1 192.168.50.251 localhost minikube old-k8s-version-066041]
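
The "generating server cert" line lists the SANs baked into server.pem: loopback, the machine IP, and the machine hostnames. A hedged sketch of issuing such a certificate with Go's crypto/x509 (the SANs, org, and 26280h expiry come from the log; everything else is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA pair,
// using the SANs logged above.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-066041"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-066041"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.251")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
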
	I1018 09:54:38.243173  147302 provision.go:177] copyRemoteCerts
	I1018 09:54:38.243250  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:54:38.243284  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.246611  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.247053  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.247082  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.247342  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.247597  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.247776  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.247982  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.337768  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:54:38.367393  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:54:38.401149  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:54:38.431217  147302 provision.go:87] duration metric: took 500.772136ms to configureAuth
	I1018 09:54:38.431259  147302 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:54:38.431422  147302 config.go:182] Loaded profile config "old-k8s-version-066041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:54:38.431498  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.435250  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.435675  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.435705  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.435943  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.436207  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.436375  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.436500  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.436620  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.436877  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:38.436901  147302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:54:38.688955  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:54:38.688978  147302 main.go:141] libmachine: Checking connection to Docker...
	I1018 09:54:38.688987  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetURL
	I1018 09:54:38.690225  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | using libvirt version 8000000
	I1018 09:54:38.693295  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.693715  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.693747  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.693971  147302 main.go:141] libmachine: Docker is up and running!
	I1018 09:54:38.693984  147302 main.go:141] libmachine: Reticulating splines...
	I1018 09:54:38.693993  147302 client.go:171] duration metric: took 21.938575911s to LocalClient.Create
	I1018 09:54:38.694028  147302 start.go:167] duration metric: took 21.938647418s to libmachine.API.Create "old-k8s-version-066041"
	I1018 09:54:38.694042  147302 start.go:293] postStartSetup for "old-k8s-version-066041" (driver="kvm2")
	I1018 09:54:38.694057  147302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:54:38.694084  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.694359  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:54:38.694385  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.697086  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.697563  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.697594  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.697814  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.698024  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.698281  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.698472  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.788293  147302 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:54:38.793094  147302 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:54:38.793123  147302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:54:38.793224  147302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:54:38.793318  147302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:54:38.793438  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:54:38.805177  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:38.834992  147302 start.go:296] duration metric: took 140.929877ms for postStartSetup
	I1018 09:54:38.835056  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetConfigRaw
	I1018 09:54:38.835854  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:38.838584  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.838946  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.838973  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.839261  147302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/config.json ...
	I1018 09:54:38.839471  147302 start.go:128] duration metric: took 22.213092077s to createHost
	I1018 09:54:38.839497  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.842295  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.842765  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.842796  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.842969  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.843174  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.843358  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.843491  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.843635  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.843891  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:38.843905  147302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:54:38.962108  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760781278.924811921
	
	I1018 09:54:38.962133  147302 fix.go:216] guest clock: 1760781278.924811921
	I1018 09:54:38.962179  147302 fix.go:229] Guest: 2025-10-18 09:54:38.924811921 +0000 UTC Remote: 2025-10-18 09:54:38.839484303 +0000 UTC m=+26.990656459 (delta=85.327618ms)
	I1018 09:54:38.962231  147302 fix.go:200] guest clock delta is within tolerance: 85.327618ms
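
For reference, the tolerance check above boils down to parsing the guest's `date +%s.%N` output and diffing it against the host clock. A small sketch reproducing the logged ~85ms delta (the 2s tolerance is an assumption; the log does not state minikube's actual threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns
// guest minus host, matching the sign of the logged delta.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values copied from the log lines above.
	host := time.Date(2025, 10, 18, 9, 54, 38, 839484303, time.UTC)
	d, err := clockDelta("1760781278.924811921", host)
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // assumed threshold, not minikube's documented value
	fmt.Printf("delta=%v, within tolerance: %v\n", d, math.Abs(float64(d)) < float64(tolerance))
}
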
	I1018 09:54:38.962240  147302 start.go:83] releasing machines lock for "old-k8s-version-066041", held for 22.336036835s
	I1018 09:54:38.962273  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.962648  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:38.966034  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.966411  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.966445  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.966740  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967455  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967670  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967761  147302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:54:38.967823  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.967952  147302 ssh_runner.go:195] Run: cat /version.json
	I1018 09:54:38.967983  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.971470  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.971707  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.971965  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.971993  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.972214  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.972241  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.972270  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.972448  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.972547  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.972654  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.972752  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.972815  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.972952  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.972956  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:39.061030  147302 ssh_runner.go:195] Run: systemctl --version
	I1018 09:54:39.100196  147302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:54:39.269948  147302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:54:39.279307  147302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:54:39.279398  147302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:54:39.302804  147302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:54:39.302835  147302 start.go:495] detecting cgroup driver to use...
	I1018 09:54:39.302909  147302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:54:39.324743  147302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:54:39.342072  147302 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:54:39.342133  147302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:54:39.364426  147302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:54:39.382068  147302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:54:39.541247  147302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:54:39.756993  147302 docker.go:234] disabling docker service ...
	I1018 09:54:39.757061  147302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:54:39.774382  147302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:54:39.789799  147302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:54:39.982894  147302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:54:40.151912  147302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:54:40.170278  147302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:54:40.199305  147302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:54:40.199377  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.214416  147302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:54:40.214492  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.228288  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.240814  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.253368  147302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:54:40.266594  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.279518  147302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.300586  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
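
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these key settings (a reconstruction from the commands, not a capture of the actual file):

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
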
	I1018 09:54:40.313104  147302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:54:40.323839  147302 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:54:40.323916  147302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:54:40.346718  147302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
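
The failed sysctl is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. A hypothetical Go rendering of that fallback (requires root, like the logged commands):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The sysctl node only exists after br_netfilter is loaded, hence
	// the "cannot stat" error in the log on a fresh guest.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatal(err)
	}
}
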
	I1018 09:54:40.359618  147302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:40.509746  147302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:54:40.631903  147302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:54:40.631975  147302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:54:40.637320  147302 start.go:563] Will wait 60s for crictl version
	I1018 09:54:40.637384  147302 ssh_runner.go:195] Run: which crictl
	I1018 09:54:40.641796  147302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:54:40.683365  147302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:54:40.683471  147302 ssh_runner.go:195] Run: crio --version
	I1018 09:54:40.713622  147302 ssh_runner.go:195] Run: crio --version
	I1018 09:54:40.744231  147302 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I1018 09:54:40.745494  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:40.748398  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:40.748723  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:40.748747  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:40.749039  147302 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1018 09:54:40.753642  147302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:54:40.768482  147302 kubeadm.go:883] updating cluster {Name:old-k8s-version-066041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:54:40.768620  147302 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:54:40.768695  147302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:40.804721  147302 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1018 09:54:40.804820  147302 ssh_runner.go:195] Run: which lz4
	I1018 09:54:40.809435  147302 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:54:40.814063  147302 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:54:40.814097  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
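
The scp above is the preload fallback: `crictl images --output json` did not show the expected kube-apiserver image, so the ~457MB cached tarball is copied in instead of pulling each image. A sketch of that decision, assuming crictl's usual JSON shape (an images array with repoTags):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether the runtime already has the wanted image,
// mirroring the check behind "assuming images are not preloaded".
func preloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.28.0")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("images not preloaded; scp the preload tarball and extract it")
	}
}
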
	I1018 09:54:37.730096  147912 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:37.730162  147912 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:54:37.730178  147912 cache.go:58] Caching tarball of preloaded images
	I1018 09:54:37.730271  147912 preload.go:233] Found /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:54:37.730285  147912 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:54:37.730418  147912 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/embed-certs-512028/config.json ...
	I1018 09:54:37.730453  147912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/embed-certs-512028/config.json: {Name:mk11a728f68d2fd3984d684d4680f1a594ae15a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:37.730700  147912 start.go:360] acquireMachinesLock for embed-certs-512028: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
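
The acquireMachinesLock line shows why the parallel profiles in this log (pause, old-k8s-version, embed-certs, no-preload) create hosts one at a time: a named lock retried with a 500ms delay up to a 13m timeout, per the logged spec. A simplified lockfile sketch (an assumption about the mechanism, not minikube's lock package):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock creates the lock file exclusively, retrying every delay
// until timeout; the returned func releases the lock.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/mk-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; create the machine here")
}
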
	I1018 09:54:39.229908  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.229947  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.233649  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234005  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.234039  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234273  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.234500  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234680  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234817  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.234984  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.235237  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.235255  147357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-551330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-551330/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-551330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:54:39.347064  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:39.347103  147357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:54:39.347187  147357 buildroot.go:174] setting up certificates
	I1018 09:54:39.347206  147357 provision.go:84] configureAuth start
	I1018 09:54:39.347227  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.347563  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:39.351095  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351587  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.351618  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351960  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.355289  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.355813  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.355848  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.356065  147357 provision.go:143] copyHostCerts
	I1018 09:54:39.356129  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:54:39.356164  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:54:39.356239  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:54:39.356342  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:54:39.356350  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:54:39.356373  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:54:39.356429  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:54:39.356436  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:54:39.356455  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:54:39.356510  147357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.pause-551330 san=[127.0.0.1 192.168.72.173 localhost minikube pause-551330]
	I1018 09:54:39.700579  147357 provision.go:177] copyRemoteCerts
	I1018 09:54:39.700702  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:54:39.700736  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.703988  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704373  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.704403  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704662  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.704897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.705078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.705246  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:39.796151  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:54:39.835425  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:54:39.879533  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:54:39.916276  147357 provision.go:87] duration metric: took 569.05192ms to configureAuth
	I1018 09:54:39.916316  147357 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:54:39.916597  147357 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:39.916720  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.920699  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921180  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.921212  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921477  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.921772  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.921975  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.922130  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.922335  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.922588  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.922609  147357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:54:45.810291  147724 start.go:364] duration metric: took 12.607562358s to acquireMachinesLock for "no-preload-231061"
	I1018 09:54:45.810369  147724 start.go:93] Provisioning new machine with config: &{Name:no-preload-231061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.34.1 ClusterName:no-preload-231061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:54:45.810480  147724 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 09:54:42.480936  147302 crio.go:462] duration metric: took 1.67154313s to copy over tarball
	I1018 09:54:42.481020  147302 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 09:54:44.310878  147302 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.829826211s)
	I1018 09:54:44.310909  147302 crio.go:469] duration metric: took 1.829937329s to extract the tarball
	I1018 09:54:44.310917  147302 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 09:54:44.356503  147302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:44.401885  147302 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:44.401911  147302 cache_images.go:85] Images are preloaded, skipping loading
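
The "all images are preloaded" decision above comes from asking the runtime what it already has via `crictl images --output json` and comparing against the image list kubeadm needs. A minimal sketch of that check in Go follows; the JSON field names in the struct are an assumption about crictl's output shape (not a pinned schema), and the `want` list is illustrative, not minikube's real required-image list.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImages mirrors only the parts of `crictl images --output json` this
// check needs; field names here are assumed from typical crictl output.
type criImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	// Index every tag the runtime already holds.
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative required list; a real caller would use kubeadm's list.
	want := []string{"registry.k8s.io/pause:3.9"}
	for _, w := range want {
		if !have[w] {
			fmt.Println("missing, would load:", w)
			return
		}
	}
	fmt.Println("all images preloaded, skipping load")
}
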
	I1018 09:54:44.401920  147302 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.28.0 crio true true} ...
	I1018 09:54:44.402068  147302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-066041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:54:44.402229  147302 ssh_runner.go:195] Run: crio config
	I1018 09:54:44.447511  147302 cni.go:84] Creating CNI manager for ""
	I1018 09:54:44.447550  147302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:44.447579  147302 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:54:44.447611  147302 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-066041 NodeName:old-k8s-version-066041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:54:44.447786  147302 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-066041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
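
The kubeadm config above is produced by filling a template with the per-profile values from the kubeadm options struct (advertise address, node name, ports, and so on). A cut-down sketch of that mechanism using Go's text/template follows; the template here renders only the InitConfiguration stanza, with values copied from this log, and is an illustration rather than minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// A stand-in for minikube's kubeadm config generation: the real template
// covers every document in the log above; this renders just one stanza.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log; any cluster substitutes its own.
	err := t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.50.251", 8443, "old-k8s-version-066041"})
	if err != nil {
		panic(err)
	}
}
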
	
	I1018 09:54:44.447865  147302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 09:54:44.459746  147302 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:54:44.459836  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:54:44.471346  147302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1018 09:54:44.491741  147302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:54:44.512324  147302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1018 09:54:44.532346  147302 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1018 09:54:44.536548  147302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
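
The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal line, appends the current IP, writes to a temp file, and copies it into place. A rough Go equivalent of the same rewrite follows, as a sketch only (it writes a plain file; the real target, /etc/hosts, needs root, which is why minikube shells out with sudo).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given host and appends a
// fresh "ip<TAB>host" mapping, mirroring the shell one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blank lines and the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.50.251", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
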
	I1018 09:54:44.551133  147302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:44.699648  147302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:54:44.720051  147302 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041 for IP: 192.168.50.251
	I1018 09:54:44.720082  147302 certs.go:195] generating shared ca certs ...
	I1018 09:54:44.720105  147302 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.720323  147302 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:54:44.720381  147302 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:54:44.720395  147302 certs.go:257] generating profile certs ...
	I1018 09:54:44.720472  147302 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key
	I1018 09:54:44.720503  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt with IP's: []
	I1018 09:54:44.902952  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt ...
	I1018 09:54:44.902986  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: {Name:mk1bd7ee7179de89578d9501a12aef2959c7dd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.903188  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key ...
	I1018 09:54:44.903203  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key: {Name:mk07d294305490e2021d8bc26d7d12c849437a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.903290  147302 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7
	I1018 09:54:44.903307  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.251]
	I1018 09:54:45.098152  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 ...
	I1018 09:54:45.098194  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7: {Name:mkb51f3eccb5c76558dc66d9dac98c0cfd3ab8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.098424  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7 ...
	I1018 09:54:45.098466  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7: {Name:mke44f3de2a7fbcad7a9cc846715c6324b76fdb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.098556  147302 certs.go:382] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt
	I1018 09:54:45.098631  147302 certs.go:386] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key
	I1018 09:54:45.098685  147302 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key
	I1018 09:54:45.098700  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt with IP's: []
	I1018 09:54:45.213527  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt ...
	I1018 09:54:45.213559  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt: {Name:mk750f42c193cb6914dd283f6631a022e4d49119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.213772  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key ...
	I1018 09:54:45.213796  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key: {Name:mk728ed852b0ae0881678a792e48ddf3af4012b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
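
The profile certs being generated above are ordinary x509 certificates whose interesting property is the IP SAN list (10.96.0.1 for the in-cluster service VIP, 127.0.0.1, and the node IP). A self-contained sketch with Go's crypto/x509 follows; it self-signs for brevity, whereas minikube signs with its shared CA, and the SAN values are copied from this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs an apiserver cert needs, per the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.251"),
		},
	}
	// Self-signed (template used as its own parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
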
	I1018 09:54:45.214035  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:54:45.214081  147302 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:54:45.214088  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:54:45.214119  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:54:45.214180  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:54:45.214230  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:54:45.214290  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:45.215018  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:54:45.249813  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:54:45.284502  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:54:45.315355  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:54:45.349397  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:54:45.380658  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:54:45.418852  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:54:45.452453  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:54:45.490111  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:54:45.530613  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:54:45.565761  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:54:45.598183  147302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:54:45.619504  147302 ssh_runner.go:195] Run: openssl version
	I1018 09:54:45.627145  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:54:45.641829  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.647764  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.647834  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.656285  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:54:45.671903  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:54:45.686925  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.694273  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.694361  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.703070  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:54:45.716791  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:54:45.729968  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.735572  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.735651  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.743223  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
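
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: TLS libraries that use a cert directory look CAs up by hash, not filename, so each trusted cert gets a <hash>.0 link. A small Go sketch of the same compute-hash-then-link step follows; like the log, it shells out to openssl rather than reimplementing the hash.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM cert and symlinks
// <hash>.0 in certsDir to it, matching the ln -fs commands in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link (ln -f semantics)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
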
	I1018 09:54:45.756957  147302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:54:45.761712  147302 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:54:45.761787  147302 kubeadm.go:400] StartCluster: {Name:old-k8s-version-066041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:45.761899  147302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:54:45.762000  147302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:54:45.803429  147302 cri.go:89] found id: ""
	I1018 09:54:45.803518  147302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:54:45.818562  147302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:54:45.831741  147302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:54:45.844987  147302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:54:45.845009  147302 kubeadm.go:157] found existing configuration files:
	
	I1018 09:54:45.845056  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:54:45.858115  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:54:45.858203  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:54:45.872235  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:54:45.883500  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:54:45.883575  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:54:45.896698  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:54:45.911815  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:54:45.911876  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:54:45.925064  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:54:45.936022  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:54:45.936099  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
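
The stale-config cleanup above follows one rule per file: if the kubeconfig already points at the expected control-plane endpoint it is kept, otherwise it is removed (with rm -f semantics, so a missing file is fine) and kubeadm regenerates it. A compact Go sketch of that loop follows, as an illustration of the logic rather than minikube's kubeadm.go code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint; otherwise it removes the file so that
// kubeadm writes a fresh one.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config is current, keep it
		}
		os.Remove(p) // missing or stale: remove, errors ignored (rm -f)
		fmt.Println("removed stale config:", p)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
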
	I1018 09:54:45.948344  147302 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1018 09:54:46.026739  147302 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 09:54:46.026797  147302 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:54:46.192281  147302 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:54:46.192464  147302 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:54:46.192587  147302 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1018 09:54:46.474848  147302 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:54:46.637184  147302 out.go:252]   - Generating certificates and keys ...
	I1018 09:54:46.637330  147302 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:54:46.637425  147302 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:54:46.689119  147302 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:54:45.845943  147724 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 09:54:45.846172  147724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:45.846216  147724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:45.862854  147724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I1018 09:54:45.863358  147724 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:45.863925  147724 main.go:141] libmachine: Using API Version  1
	I1018 09:54:45.863953  147724 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:45.864366  147724 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:45.864700  147724 main.go:141] libmachine: (no-preload-231061) Calling .GetMachineName
	I1018 09:54:45.864929  147724 main.go:141] libmachine: (no-preload-231061) Calling .DriverName
	I1018 09:54:45.865161  147724 start.go:159] libmachine.API.Create for "no-preload-231061" (driver="kvm2")
	I1018 09:54:45.865190  147724 client.go:168] LocalClient.Create starting
	I1018 09:54:45.865222  147724 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem
	I1018 09:54:45.865260  147724 main.go:141] libmachine: Decoding PEM data...
	I1018 09:54:45.865276  147724 main.go:141] libmachine: Parsing certificate...
	I1018 09:54:45.865335  147724 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem
	I1018 09:54:45.865353  147724 main.go:141] libmachine: Decoding PEM data...
	I1018 09:54:45.865364  147724 main.go:141] libmachine: Parsing certificate...
	I1018 09:54:45.865380  147724 main.go:141] libmachine: Running pre-create checks...
	I1018 09:54:45.865388  147724 main.go:141] libmachine: (no-preload-231061) Calling .PreCreateCheck
	I1018 09:54:45.865784  147724 main.go:141] libmachine: (no-preload-231061) Calling .GetConfigRaw
	I1018 09:54:45.866340  147724 main.go:141] libmachine: Creating machine...
	I1018 09:54:45.866357  147724 main.go:141] libmachine: (no-preload-231061) Calling .Create
	I1018 09:54:45.866503  147724 main.go:141] libmachine: (no-preload-231061) creating domain...
	I1018 09:54:45.866527  147724 main.go:141] libmachine: (no-preload-231061) creating network...
	I1018 09:54:45.868220  147724 main.go:141] libmachine: (no-preload-231061) DBG | found existing default network
	I1018 09:54:45.868429  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network connections='2'>
	I1018 09:54:45.868449  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>default</name>
	I1018 09:54:45.868463  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 09:54:45.868477  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <forward mode='nat'>
	I1018 09:54:45.868488  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <nat>
	I1018 09:54:45.868498  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <port start='1024' end='65535'/>
	I1018 09:54:45.868505  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </nat>
	I1018 09:54:45.868520  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </forward>
	I1018 09:54:45.868534  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 09:54:45.868547  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 09:54:45.868576  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 09:54:45.868597  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:45.868612  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 09:54:45.868621  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:45.868628  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:45.868638  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:45.868648  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.869419  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:45.869263  147991 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013bb0}
	I1018 09:54:45.869441  147724 main.go:141] libmachine: (no-preload-231061) DBG | defining private network:
	I1018 09:54:45.869452  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.869460  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network>
	I1018 09:54:45.869470  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>mk-no-preload-231061</name>
	I1018 09:54:45.869480  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <dns enable='no'/>
	I1018 09:54:45.869493  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 09:54:45.869506  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:45.869516  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 09:54:45.869523  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:45.869554  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:45.869576  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:45.869589  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
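
Before defining the network XML above, the driver has to find a /24 that no host interface already occupies (the "using free private subnet 192.168.39.0/24" line). A minimal sketch of that scan with the standard net package follows; the candidate list here is illustrative and not minikube's actual search order.

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate /24 that does not contain
// any address already assigned on the host, roughly what network.go:206
// reports in the log above.
func freePrivateSubnet(candidates []string) (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		taken := false
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	s, err := freePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", s)
}
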
	I1018 09:54:46.000496  147724 main.go:141] libmachine: (no-preload-231061) DBG | creating private network mk-no-preload-231061 192.168.39.0/24...
	I1018 09:54:46.083075  147724 main.go:141] libmachine: (no-preload-231061) DBG | private network mk-no-preload-231061 192.168.39.0/24 created
	I1018 09:54:46.083332  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network>
	I1018 09:54:46.083348  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>mk-no-preload-231061</name>
	I1018 09:54:46.083358  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>257b51a5-7a9f-4e55-b4e5-9268ae318ca4</uuid>
	I1018 09:54:46.083370  147724 main.go:141] libmachine: (no-preload-231061) setting up store path in /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 ...
	I1018 09:54:46.083380  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 09:54:46.083394  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <mac address='52:54:00:23:81:66'/>
	I1018 09:54:46.083401  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <dns enable='no'/>
	I1018 09:54:46.083412  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 09:54:46.083423  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:46.083435  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 09:54:46.083445  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:46.083461  147724 main.go:141] libmachine: (no-preload-231061) building disk image from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 09:54:46.083472  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:46.083480  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:46.083501  147724 main.go:141] libmachine: (no-preload-231061) Downloading /home/jenkins/minikube-integration/21764-104457/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 09:54:46.083513  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:46.083537  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:46.083331  147991 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:46.373572  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:46.373396  147991 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/id_rsa...
	I1018 09:54:47.036823  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:47.036658  147991 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk...
	I1018 09:54:47.036870  147724 main.go:141] libmachine: (no-preload-231061) DBG | Writing magic tar header
	I1018 09:54:47.036890  147724 main.go:141] libmachine: (no-preload-231061) DBG | Writing SSH key tar header
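
The "Creating ssh key" step above produces the machine keypair: a PEM-encoded RSA private key on disk plus a public half in authorized_keys format for the guest. A self-contained Go sketch of that generation follows, assuming golang.org/x/crypto/ssh for the public-key encoding; output filenames are placeholders.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the RSA keypair for the machine.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half: PKCS#1, PEM-encoded, owner-readable only.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// Public half: authorized_keys format, ready to drop into the guest.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}
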
	I1018 09:54:47.036903  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:47.036820  147991 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 ...
	I1018 09:54:47.037036  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061
	I1018 09:54:47.037058  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines
	I1018 09:54:47.037073  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 (perms=drwx------)
	I1018 09:54:47.037106  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:47.037131  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines (perms=drwxr-xr-x)
	I1018 09:54:47.037157  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457
	I1018 09:54:47.037169  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube (perms=drwxr-xr-x)
	I1018 09:54:47.037180  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 09:54:47.037192  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457 (perms=drwxrwxr-x)
	I1018 09:54:47.037210  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 09:54:47.037223  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 09:54:47.037236  147724 main.go:141] libmachine: (no-preload-231061) defining domain...
	I1018 09:54:47.037248  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins
	I1018 09:54:47.037261  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home
	I1018 09:54:47.037273  147724 main.go:141] libmachine: (no-preload-231061) DBG | skipping /home - not owner
	I1018 09:54:47.038566  147724 main.go:141] libmachine: (no-preload-231061) defining domain using XML: 
	I1018 09:54:47.038584  147724 main.go:141] libmachine: (no-preload-231061) <domain type='kvm'>
	I1018 09:54:47.038594  147724 main.go:141] libmachine: (no-preload-231061)   <name>no-preload-231061</name>
	I1018 09:54:47.038601  147724 main.go:141] libmachine: (no-preload-231061)   <memory unit='MiB'>3072</memory>
	I1018 09:54:47.038608  147724 main.go:141] libmachine: (no-preload-231061)   <vcpu>2</vcpu>
	I1018 09:54:47.038614  147724 main.go:141] libmachine: (no-preload-231061)   <features>
	I1018 09:54:47.038621  147724 main.go:141] libmachine: (no-preload-231061)     <acpi/>
	I1018 09:54:47.038632  147724 main.go:141] libmachine: (no-preload-231061)     <apic/>
	I1018 09:54:47.038639  147724 main.go:141] libmachine: (no-preload-231061)     <pae/>
	I1018 09:54:47.038648  147724 main.go:141] libmachine: (no-preload-231061)   </features>
	I1018 09:54:47.038680  147724 main.go:141] libmachine: (no-preload-231061)   <cpu mode='host-passthrough'>
	I1018 09:54:47.038716  147724 main.go:141] libmachine: (no-preload-231061)   </cpu>
	I1018 09:54:47.038750  147724 main.go:141] libmachine: (no-preload-231061)   <os>
	I1018 09:54:47.038776  147724 main.go:141] libmachine: (no-preload-231061)     <type>hvm</type>
	I1018 09:54:47.038790  147724 main.go:141] libmachine: (no-preload-231061)     <boot dev='cdrom'/>
	I1018 09:54:47.038800  147724 main.go:141] libmachine: (no-preload-231061)     <boot dev='hd'/>
	I1018 09:54:47.038812  147724 main.go:141] libmachine: (no-preload-231061)     <bootmenu enable='no'/>
	I1018 09:54:47.038821  147724 main.go:141] libmachine: (no-preload-231061)   </os>
	I1018 09:54:47.038830  147724 main.go:141] libmachine: (no-preload-231061)   <devices>
	I1018 09:54:47.038843  147724 main.go:141] libmachine: (no-preload-231061)     <disk type='file' device='cdrom'>
	I1018 09:54:47.038861  147724 main.go:141] libmachine: (no-preload-231061)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/boot2docker.iso'/>
	I1018 09:54:47.038877  147724 main.go:141] libmachine: (no-preload-231061)       <target dev='hdc' bus='scsi'/>
	I1018 09:54:47.038898  147724 main.go:141] libmachine: (no-preload-231061)       <readonly/>
	I1018 09:54:47.038918  147724 main.go:141] libmachine: (no-preload-231061)     </disk>
	I1018 09:54:47.038932  147724 main.go:141] libmachine: (no-preload-231061)     <disk type='file' device='disk'>
	I1018 09:54:47.038947  147724 main.go:141] libmachine: (no-preload-231061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 09:54:47.038963  147724 main.go:141] libmachine: (no-preload-231061)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk'/>
	I1018 09:54:47.038975  147724 main.go:141] libmachine: (no-preload-231061)       <target dev='hda' bus='virtio'/>
	I1018 09:54:47.038988  147724 main.go:141] libmachine: (no-preload-231061)     </disk>
	I1018 09:54:47.039004  147724 main.go:141] libmachine: (no-preload-231061)     <interface type='network'>
	I1018 09:54:47.039018  147724 main.go:141] libmachine: (no-preload-231061)       <source network='mk-no-preload-231061'/>
	I1018 09:54:47.039029  147724 main.go:141] libmachine: (no-preload-231061)       <model type='virtio'/>
	I1018 09:54:47.039041  147724 main.go:141] libmachine: (no-preload-231061)     </interface>
	I1018 09:54:47.039049  147724 main.go:141] libmachine: (no-preload-231061)     <interface type='network'>
	I1018 09:54:47.039061  147724 main.go:141] libmachine: (no-preload-231061)       <source network='default'/>
	I1018 09:54:47.039076  147724 main.go:141] libmachine: (no-preload-231061)       <model type='virtio'/>
	I1018 09:54:47.039087  147724 main.go:141] libmachine: (no-preload-231061)     </interface>
	I1018 09:54:47.039098  147724 main.go:141] libmachine: (no-preload-231061)     <serial type='pty'>
	I1018 09:54:47.039113  147724 main.go:141] libmachine: (no-preload-231061)       <target port='0'/>
	I1018 09:54:47.039123  147724 main.go:141] libmachine: (no-preload-231061)     </serial>
	I1018 09:54:47.039131  147724 main.go:141] libmachine: (no-preload-231061)     <console type='pty'>
	I1018 09:54:47.039160  147724 main.go:141] libmachine: (no-preload-231061)       <target type='serial' port='0'/>
	I1018 09:54:47.039172  147724 main.go:141] libmachine: (no-preload-231061)     </console>
	I1018 09:54:47.039179  147724 main.go:141] libmachine: (no-preload-231061)     <rng model='virtio'>
	I1018 09:54:47.039193  147724 main.go:141] libmachine: (no-preload-231061)       <backend model='random'>/dev/random</backend>
	I1018 09:54:47.039203  147724 main.go:141] libmachine: (no-preload-231061)     </rng>
	I1018 09:54:47.039212  147724 main.go:141] libmachine: (no-preload-231061)   </devices>
	I1018 09:54:47.039222  147724 main.go:141] libmachine: (no-preload-231061) </domain>
	I1018 09:54:47.039246  147724 main.go:141] libmachine: (no-preload-231061) 
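
The domain XML built above is handed to libvirt in the define -> ensure-networks-active -> start sequence the following log lines record. A sketch of that sequence follows, assuming the libvirt.org/go/libvirt bindings (DomainDefineXML, LookupNetworkByName, Domain.Create); exact signatures can vary by binding version, and the XML path is a placeholder.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The XML minikube templates above, read from a file for this sketch.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Ensure both attached networks are active before booting the domain.
	for _, name := range []string{"default", "mk-no-preload-231061"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if active, _ := net.IsActive(); !active {
			if err := net.Create(); err != nil {
				log.Fatal(err)
			}
		}
		net.Free()
	}

	if err := dom.Create(); err != nil { // starts (boots) the defined domain
		log.Fatal(err)
	}
}
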
	I1018 09:54:47.198970  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:3d:7c:f4 in network default
	I1018 09:54:47.200315  147724 main.go:141] libmachine: (no-preload-231061) starting domain...
	I1018 09:54:47.200391  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:47.200409  147724 main.go:141] libmachine: (no-preload-231061) ensuring networks are active...
	I1018 09:54:47.201319  147724 main.go:141] libmachine: (no-preload-231061) Ensuring network default is active
	I1018 09:54:47.201942  147724 main.go:141] libmachine: (no-preload-231061) Ensuring network mk-no-preload-231061 is active
	I1018 09:54:47.203009  147724 main.go:141] libmachine: (no-preload-231061) getting domain XML...
	I1018 09:54:47.204429  147724 main.go:141] libmachine: (no-preload-231061) DBG | starting domain XML:
	I1018 09:54:47.204451  147724 main.go:141] libmachine: (no-preload-231061) DBG | <domain type='kvm'>
	I1018 09:54:47.204476  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>no-preload-231061</name>
	I1018 09:54:47.204495  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>7d822fd5-f00f-41a7-af38-4e50b606b202</uuid>
	I1018 09:54:47.204515  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 09:54:47.204530  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 09:54:47.204542  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 09:54:47.204549  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <os>
	I1018 09:54:47.204561  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 09:54:47.204572  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <boot dev='cdrom'/>
	I1018 09:54:47.204580  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <boot dev='hd'/>
	I1018 09:54:47.204586  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <bootmenu enable='no'/>
	I1018 09:54:47.204594  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </os>
	I1018 09:54:47.204608  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <features>
	I1018 09:54:47.204617  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <acpi/>
	I1018 09:54:47.204623  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <apic/>
	I1018 09:54:47.204630  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <pae/>
	I1018 09:54:47.204636  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </features>
	I1018 09:54:47.204645  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 09:54:47.204651  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <clock offset='utc'/>
	I1018 09:54:47.204660  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 09:54:47.204666  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_reboot>restart</on_reboot>
	I1018 09:54:47.204705  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_crash>destroy</on_crash>
	I1018 09:54:47.204732  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <devices>
	I1018 09:54:47.204747  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 09:54:47.204757  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <disk type='file' device='cdrom'>
	I1018 09:54:47.204768  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <driver name='qemu' type='raw'/>
	I1018 09:54:47.204784  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/boot2docker.iso'/>
	I1018 09:54:47.204795  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 09:54:47.204806  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <readonly/>
	I1018 09:54:47.204817  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 09:54:47.204829  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </disk>
	I1018 09:54:47.204838  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <disk type='file' device='disk'>
	I1018 09:54:47.204854  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 09:54:47.204868  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk'/>
	I1018 09:54:47.204876  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target dev='hda' bus='virtio'/>
	I1018 09:54:47.204884  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 09:54:47.204888  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </disk>
	I1018 09:54:47.204908  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 09:54:47.204918  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 09:54:47.204926  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </controller>
	I1018 09:54:47.204939  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 09:54:47.204952  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 09:54:47.204966  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 09:54:47.204978  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </controller>
	I1018 09:54:47.204990  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <interface type='network'>
	I1018 09:54:47.205001  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <mac address='52:54:00:e0:ab:92'/>
	I1018 09:54:47.205013  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source network='mk-no-preload-231061'/>
	I1018 09:54:47.205038  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <model type='virtio'/>
	I1018 09:54:47.205060  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 09:54:47.205072  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </interface>
	I1018 09:54:47.205083  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <interface type='network'>
	I1018 09:54:47.205097  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <mac address='52:54:00:3d:7c:f4'/>
	I1018 09:54:47.205111  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source network='default'/>
	I1018 09:54:47.205121  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <model type='virtio'/>
	I1018 09:54:47.205134  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 09:54:47.205161  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </interface>
	I1018 09:54:47.205174  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <serial type='pty'>
	I1018 09:54:47.205187  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target type='isa-serial' port='0'>
	I1018 09:54:47.205201  147724 main.go:141] libmachine: (no-preload-231061) DBG |         <model name='isa-serial'/>
	I1018 09:54:47.205213  147724 main.go:141] libmachine: (no-preload-231061) DBG |       </target>
	I1018 09:54:47.205222  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </serial>
	I1018 09:54:47.205231  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <console type='pty'>
	I1018 09:54:47.205243  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target type='serial' port='0'/>
	I1018 09:54:47.205256  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </console>
	I1018 09:54:47.205272  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <input type='mouse' bus='ps2'/>
	I1018 09:54:47.205290  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 09:54:47.205308  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <audio id='1' type='none'/>
	I1018 09:54:47.205323  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <memballoon model='virtio'>
	I1018 09:54:47.205338  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 09:54:47.205346  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </memballoon>
	I1018 09:54:47.205353  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <rng model='virtio'>
	I1018 09:54:47.205363  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <backend model='random'>/dev/random</backend>
	I1018 09:54:47.205373  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 09:54:47.205380  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </rng>
	I1018 09:54:47.205385  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </devices>
	I1018 09:54:47.205390  147724 main.go:141] libmachine: (no-preload-231061) DBG | </domain>
	I1018 09:54:47.205396  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.547728  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:54:45.547759  147357 machine.go:96] duration metric: took 6.562461144s to provisionDockerMachine
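
The "SSH cmd err, output: <nil>" lines above confirm the CRIO_MINIKUBE_OPTIONS drop-in was written and crio restarted, the standard push-config-over-SSH pattern this provisioning uses throughout. A minimal Go sketch of that pattern follows, assuming golang.org/x/crypto/ssh; the host, user, and key path are placeholders taken from this log, and this illustrates the mechanism rather than minikube's actual ssh_runner.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the machine's private key (placeholder path).
	key, err := os.ReadFile("/path/to/machines/pause-551330/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.173:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same shape as the logged command: write a sysconfig drop-in, restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
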
	I1018 09:54:45.547771  147357 start.go:293] postStartSetup for "pause-551330" (driver="kvm2")
	I1018 09:54:45.547782  147357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:54:45.547799  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.548276  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:54:45.548309  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.552062  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552547  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.552577  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552855  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.553105  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.553313  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.553552  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
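
The `new ssh client` entries above come from sshutil.go opening a key-authenticated connection to the guest. A minimal sketch of an equivalent connection using golang.org/x/crypto/ssh, assuming key-only auth as the `docker` user; this is not minikube's actual implementation, and host-key verification is skipped purely to keep the sketch short:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dial opens an SSH client the way the "new ssh client" log line implies:
	// private-key auth to <ip>:22 as the docker user.
	func dial(addr, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
		}
		return ssh.Dial("tcp", addr, cfg)
	}

	func main() {
		c, err := dial("192.168.72.173:22", "/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa", "docker")
		fmt.Println(c != nil, err)
	}
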
	I1018 09:54:45.639914  147357 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:54:45.645353  147357 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:54:45.645387  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:54:45.645473  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:54:45.645604  147357 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:54:45.645758  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:54:45.659585  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:45.694841  147357 start.go:296] duration metric: took 147.054302ms for postStartSetup
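
The filesync scan above maps anything under .minikube/files/<path> onto /<path> on the guest (here files/etc/ssl/certs/1083732.pem -> /etc/ssl/certs/1083732.pem). A sketch of that path translation, assuming a plain directory walk rather than minikube's actual filesync.go logic:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// destPaths maps each file under the local `files` tree to its intended
	// path on the guest, mirroring the translation the filesync log lines report.
	func destPaths(filesRoot string) (map[string]string, error) {
		out := map[string]string{}
		err := filepath.WalkDir(filesRoot, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(filesRoot, p)
			if err != nil {
				return err
			}
			out[p] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return out, err
	}

	func main() {
		m, err := destPaths("/home/jenkins/minikube-integration/21764-104457/.minikube/files")
		fmt.Println(m, err)
	}
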
	I1018 09:54:45.694886  147357 fix.go:56] duration metric: took 6.732489537s for fixHost
	I1018 09:54:45.694915  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.698341  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.698803  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.698837  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.699078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.699338  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699528  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699695  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.699923  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:45.700232  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:45.700250  147357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:54:45.810095  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760781285.807505553
	
	I1018 09:54:45.810128  147357 fix.go:216] guest clock: 1760781285.807505553
	I1018 09:54:45.810152  147357 fix.go:229] Guest: 2025-10-18 09:54:45.807505553 +0000 UTC Remote: 2025-10-18 09:54:45.694891594 +0000 UTC m=+31.626040864 (delta=112.613959ms)
	I1018 09:54:45.810186  147357 fix.go:200] guest clock delta is within tolerance: 112.613959ms
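
fix.go above parses the guest's `date +%s.%N` output and compares it against the host clock, resyncing only when the delta exceeds a tolerance. A sketch of that comparison using the values from the log lines; the one-second tolerance is an assumption (the real threshold is not shown here), and float64 parsing loses sub-microsecond precision:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses `date +%s.%N` output from the guest and returns the
	// host-minus-guest offset plus whether it falls inside the tolerance.
	func clockDelta(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		return delta, math.Abs(float64(delta)) <= float64(tol), nil
	}

	func main() {
		// Guest and host timestamps taken from the log above; 1s tolerance assumed.
		host := time.Unix(0, 1760781285694891594)
		delta, ok, err := clockDelta("1760781285.807505553", host, time.Second)
		fmt.Println(delta, ok, err)
	}
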
	I1018 09:54:45.810194  147357 start.go:83] releasing machines lock for "pause-551330", held for 6.847826758s
	I1018 09:54:45.810229  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.810587  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:45.814246  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.814743  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.814775  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.815056  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815773  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815980  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.816084  147357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:54:45.816160  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.816286  147357 ssh_runner.go:195] Run: cat /version.json
	I1018 09:54:45.816327  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.819953  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820134  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820449  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820481  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820622  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820663  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820699  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820991  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821109  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821201  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821320  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821393  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.821479  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.898763  147357 ssh_runner.go:195] Run: systemctl --version
	I1018 09:54:45.938487  147357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:54:46.095823  147357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:54:46.106551  147357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:54:46.106642  147357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:54:46.124438  147357 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:54:46.124465  147357 start.go:495] detecting cgroup driver to use...
	I1018 09:54:46.124540  147357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:54:46.149929  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:54:46.170694  147357 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:54:46.170787  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:54:46.198018  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:54:46.223925  147357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:54:46.434671  147357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:54:46.628353  147357 docker.go:234] disabling docker service ...
	I1018 09:54:46.628436  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:54:46.659616  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:54:46.678749  147357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:54:46.883707  147357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:54:47.065520  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:54:47.083763  147357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:54:47.110596  147357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:54:47.110666  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.123888  147357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:54:47.123960  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.141027  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.153739  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.167386  147357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:54:47.181822  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.195818  147357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.213241  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.232199  147357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:54:47.246299  147357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:54:47.263519  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:47.457540  147357 ssh_runner.go:195] Run: sudo systemctl restart crio
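
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of `sed -i` whole-line replacements over SSH (pause image, cgroup manager, conmon cgroup, sysctls) before restarting crio. The same whole-line replacement expressed in Go, as a sketch of one such edit rather than the command minikube actually runs:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfLine rewrites any `key = ...` line in a crio drop-in to the given
	// value, the same effect as `sed -i 's|^.*key = .*$|key = "value"|'` above.
	func setConfLine(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := setConfLine("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
		fmt.Println(err)
	}
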
	I1018 09:54:46.911245  147302 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:54:47.620570  147302 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:54:48.034661  147302 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:54:48.239708  147302 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:54:48.239867  147302 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-066041] and IPs [192.168.50.251 127.0.0.1 ::1]
	I1018 09:54:48.323776  147302 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:54:48.323986  147302 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-066041] and IPs [192.168.50.251 127.0.0.1 ::1]
	I1018 09:54:48.474272  147302 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:54:48.535998  147302 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:54:48.754036  147302 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:54:48.754189  147302 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:54:49.024000  147302 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:54:49.182665  147302 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:54:49.375417  147302 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:54:49.521924  147302 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:54:49.522077  147302 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:54:49.525309  147302 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:54:49.527278  147302 out.go:252]   - Booting up control plane ...
	I1018 09:54:49.527417  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:54:49.527790  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:54:49.529375  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:54:49.548388  147302 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:54:49.549879  147302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:54:49.550070  147302 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:54:49.747895  147302 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:54:49.097918  147724 main.go:141] libmachine: (no-preload-231061) waiting for domain to start...
	I1018 09:54:49.099509  147724 main.go:141] libmachine: (no-preload-231061) domain is now running
	I1018 09:54:49.099537  147724 main.go:141] libmachine: (no-preload-231061) waiting for IP...
	I1018 09:54:49.100383  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.101102  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.101126  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.101474  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.101568  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.101489  147991 retry.go:31] will retry after 256.251401ms: waiting for domain to come up
	I1018 09:54:49.360118  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.360837  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.360861  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.361241  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.361268  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.361215  147991 retry.go:31] will retry after 369.345746ms: waiting for domain to come up
	I1018 09:54:49.731857  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.732528  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.732571  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.732885  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.732934  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.732865  147991 retry.go:31] will retry after 375.412221ms: waiting for domain to come up
	I1018 09:54:50.109876  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:50.110632  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:50.110650  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:50.111035  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:50.111065  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:50.111019  147991 retry.go:31] will retry after 586.376388ms: waiting for domain to come up
	I1018 09:54:50.698916  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:50.699561  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:50.699586  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:50.699975  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:50.700007  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:50.699918  147991 retry.go:31] will retry after 630.515699ms: waiting for domain to come up
	I1018 09:54:51.332627  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:51.333471  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:51.333500  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:51.333936  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:51.334021  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:51.333934  147991 retry.go:31] will retry after 722.312538ms: waiting for domain to come up
	I1018 09:54:52.057791  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:52.058692  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:52.058717  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:52.059068  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:52.059113  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:52.059045  147991 retry.go:31] will retry after 1.066900916s: waiting for domain to come up
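
The `retry.go:31` lines above show the KVM driver polling libvirt for the domain's IP with a growing backoff (256ms, 369ms, 375ms, 586ms, ...). A self-contained sketch of that poll-with-backoff shape, with a stubbed lookup standing in for the real lease/ARP query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a growing
	// interval between attempts, mirroring the retry.go backoff in the log.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		wait := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(wait)
			wait += wait / 2 // roughly the growing intervals seen above
		}
		return "", errors.New("timed out waiting for domain IP")
	}

	func main() {
		calls := 0
		// Stubbed lookup: fails three times, then returns a hypothetical address.
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.61.12", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
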
	I1018 09:54:54.210056  147357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.752452014s)
	I1018 09:54:54.210106  147357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:54:54.210198  147357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:54:54.215857  147357 start.go:563] Will wait 60s for crictl version
	I1018 09:54:54.215926  147357 ssh_runner.go:195] Run: which crictl
	I1018 09:54:54.219954  147357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:54:54.267482  147357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:54:54.267577  147357 ssh_runner.go:195] Run: crio --version
	I1018 09:54:54.301699  147357 ssh_runner.go:195] Run: crio --version
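
The `Will wait 60s for socket path` step above boils down to polling `stat` on /var/run/crio/crio.sock until it appears after the crio restart. A minimal stat-based wait, assuming a fixed 500ms poll interval (the real interval is not shown in the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls os.Stat until path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %v", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}
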
	I1018 09:54:54.335217  147357 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1018 09:54:56.246824  147302 kubeadm.go:318] [apiclient] All control plane components are healthy after 6.504105 seconds
	I1018 09:54:56.247015  147302 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:54:56.270849  147302 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:54:56.817336  147302 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:54:56.818059  147302 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-066041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:54:57.338369  147302 kubeadm.go:318] [bootstrap-token] Using token: 7ie9px.ge97j4y7v23tvun8
	I1018 09:54:57.339808  147302 out.go:252]   - Configuring RBAC rules ...
	I1018 09:54:57.339990  147302 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:54:57.346869  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:54:57.357097  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:54:57.366951  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:54:57.371316  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:54:57.377726  147302 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:54:57.401036  147302 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:54:57.774038  147302 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:54:57.838491  147302 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:54:57.840086  147302 kubeadm.go:318] 
	I1018 09:54:57.840204  147302 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:54:57.840218  147302 kubeadm.go:318] 
	I1018 09:54:57.840319  147302 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:54:57.840332  147302 kubeadm.go:318] 
	I1018 09:54:57.840365  147302 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:54:57.840444  147302 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:54:57.840515  147302 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:54:57.840521  147302 kubeadm.go:318] 
	I1018 09:54:57.840591  147302 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:54:57.840597  147302 kubeadm.go:318] 
	I1018 09:54:57.840664  147302 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:54:57.840670  147302 kubeadm.go:318] 
	I1018 09:54:57.840737  147302 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:54:57.840840  147302 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:54:57.840929  147302 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:54:57.840935  147302 kubeadm.go:318] 
	I1018 09:54:57.841052  147302 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:54:57.841179  147302 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:54:57.841187  147302 kubeadm.go:318] 
	I1018 09:54:57.841306  147302 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7ie9px.ge97j4y7v23tvun8 \
	I1018 09:54:57.841449  147302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 \
	I1018 09:54:57.841478  147302 kubeadm.go:318] 	--control-plane 
	I1018 09:54:57.841483  147302 kubeadm.go:318] 
	I1018 09:54:57.841602  147302 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:54:57.841608  147302 kubeadm.go:318] 
	I1018 09:54:57.841724  147302 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7ie9px.ge97j4y7v23tvun8 \
	I1018 09:54:57.841869  147302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 
	I1018 09:54:57.844131  147302 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:54:57.844187  147302 cni.go:84] Creating CNI manager for ""
	I1018 09:54:57.844200  147302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:57.845955  147302 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1018 09:54:53.127500  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:53.128228  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:53.128256  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:53.128525  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:53.128581  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:53.128518  147991 retry.go:31] will retry after 1.043649707s: waiting for domain to come up
	I1018 09:54:54.173620  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:54.174304  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:54.174335  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:54.174642  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:54.174684  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:54.174621  147991 retry.go:31] will retry after 1.599394292s: waiting for domain to come up
	I1018 09:54:55.776612  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:55.777530  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:55.777559  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:55.778014  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:55.778071  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:55.777990  147991 retry.go:31] will retry after 1.636367317s: waiting for domain to come up
	I1018 09:54:57.416780  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:57.417539  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:57.417595  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:57.417906  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:57.417938  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:57.417863  147991 retry.go:31] will retry after 2.20798307s: waiting for domain to come up
	I1018 09:54:54.336616  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:54.340024  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:54.340488  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:54.340516  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:54.340841  147357 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1018 09:54:54.346478  147357 kubeadm.go:883] updating cluster {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:54:54.346648  147357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:54.346700  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.393189  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.393219  147357 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:54:54.393288  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.429351  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.429382  147357 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:54:54.429393  147357 kubeadm.go:934] updating node { 192.168.72.173 8443 v1.34.1 crio true true} ...
	I1018 09:54:54.429532  147357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-551330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
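
The kubelet unit above is rendered from a template with the node name, IP, and binary path filled in. A cut-down sketch of that rendering with text/template; minikube's real template has more fields and a different structure:

	package main

	import (
		"os"
		"text/template"
	)

	// A reduced version of the kubelet unit printed in the log above.
	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.Binary}} --hostname-override={{.Node}} --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime": "crio",
			"Binary":  "/var/lib/minikube/binaries/v1.34.1/kubelet",
			"Node":    "pause-551330",
			"IP":      "192.168.72.173",
		})
	}
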
	I1018 09:54:54.429623  147357 ssh_runner.go:195] Run: crio config
	I1018 09:54:54.481697  147357 cni.go:84] Creating CNI manager for ""
	I1018 09:54:54.481725  147357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:54.481771  147357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:54:54.481808  147357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.173 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-551330 NodeName:pause-551330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:54:54.481985  147357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-551330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.173"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.173"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:54:54.482057  147357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:54:54.495054  147357 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:54:54.495156  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:54:54.507323  147357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 09:54:54.532818  147357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:54:54.554767  147357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 09:54:54.577108  147357 ssh_runner.go:195] Run: grep 192.168.72.173	control-plane.minikube.internal$ /etc/hosts
	I1018 09:54:54.581771  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:54.748906  147357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:54:54.765440  147357 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330 for IP: 192.168.72.173
	I1018 09:54:54.765464  147357 certs.go:195] generating shared ca certs ...
	I1018 09:54:54.765481  147357 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:54.765688  147357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:54:54.765743  147357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:54:54.765758  147357 certs.go:257] generating profile certs ...
	I1018 09:54:54.765873  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/client.key
	I1018 09:54:54.765955  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key.f7abae6f
	I1018 09:54:54.766011  147357 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key
	I1018 09:54:54.766179  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:54:54.766220  147357 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:54:54.766234  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:54:54.766266  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:54:54.766297  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:54:54.766330  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:54:54.766394  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:54.766996  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:54:54.799419  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:54:54.836447  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:54:54.876190  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:54:54.908602  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:54:54.946763  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:54:55.099316  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:54:55.164040  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:54:55.252436  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:54:55.339043  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:54:55.415069  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:54:55.491732  147357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:54:55.546576  147357 ssh_runner.go:195] Run: openssl version
	I1018 09:54:55.562316  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:54:55.591880  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601866  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601964  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.616288  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:54:55.647017  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:54:55.678662  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691170  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691247  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.713975  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:54:55.742740  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:54:55.778834  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795270  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795346  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.816687  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
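
Each `ln -fs .../<hash>.0` step above follows OpenSSL's hashed-directory convention: the link name is the certificate's subject hash plus ".0". A sketch that reproduces one such installation by shelling out to `openssl x509 -hash`, just as the commands above do (root privileges and an installed openssl binary are assumed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert symlinks /etc/ssl/certs/<subject-hash>.0 to the PEM file,
	// which is the layout the log's openssl/ln steps build up.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace an existing link, like `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
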
	I1018 09:54:55.852282  147357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:54:55.864301  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:54:55.886636  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:54:55.909452  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:54:55.926278  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:54:55.941213  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:54:55.955890  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
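
The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, as a sketch (the path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window; the equivalent of `openssl x509 -checkend 86400`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
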
	I1018 09:54:55.974095  147357 kubeadm.go:400] StartCluster: {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:55.974274  147357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:54:55.974352  147357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:54:56.170596  147357 cri.go:89] found id: "29cc8bdc21235a3263fd07af980bbd5afddd5e8bf838d869aee15b79d773a494"
	I1018 09:54:56.170624  147357 cri.go:89] found id: "12ba7f533d86858ba90df34ecdc2481658f40f2fee74ee73c1d4d71422d3ac90"
	I1018 09:54:56.170630  147357 cri.go:89] found id: "9a47998f97871a1bdc1689b83a0f8637d3e8446f5280c36026c063fef6da5dee"
	I1018 09:54:56.170635  147357 cri.go:89] found id: "35e6ebdf38ddd767dbcb32100e38d541fabd6aa49dbcfe4f5c4ec0126f62afd6"
	I1018 09:54:56.170639  147357 cri.go:89] found id: "6cd73c1cfa681b6f01554bc334d6d83ec0b898a4c61889e41fc36e0da6cc8160"
	I1018 09:54:56.170644  147357 cri.go:89] found id: "cf297adff2cd81079a444636d2d0d432f18a698dd99539c0fcaf3442d5dd19d1"
	I1018 09:54:56.170648  147357 cri.go:89] found id: "95dca9a9c58403a13f82a1493979bb1137030c24168e0d5e658e0c4013ac19bc"
	I1018 09:54:56.170652  147357 cri.go:89] found id: "8e2b055b2814c8c9d86ead76882979ac75549da5e8b5ff1fdcfd1559f3bc5d6b"
	I1018 09:54:56.170655  147357 cri.go:89] found id: "a85801441afa7aeb2a2d98a543437e2586b071068cb98586798b3c805b2cd4ae"
	I1018 09:54:56.170664  147357 cri.go:89] found id: "9249eb8ae6f593eba3ce4059af8cd0db63cc9bb6627365a4204933eff5a4ea62"
	I1018 09:54:56.170669  147357 cri.go:89] found id: ""
	I1018 09:54:56.170731  147357 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-551330 -n pause-551330
helpers_test.go:269: (dbg) Run:  kubectl --context pause-551330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-551330 -n pause-551330
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-551330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-551330 logs -n 25: (3.561081233s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-882442 sudo cat /etc/containerd/config.toml                                                                                                                                                                                                                   │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo containerd config dump                                                                                                                                                                                                                            │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                                     │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo systemctl cat crio --no-pager                                                                                                                                                                                                                     │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                                           │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ ssh     │ -p cilium-882442 sudo crio config                                                                                                                                                                                                                                       │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │                     │
	│ delete  │ -p cilium-882442                                                                                                                                                                                                                                                        │ cilium-882442             │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │ 18 Oct 25 09:52 UTC │
	│ start   │ -p pause-551330 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-551330              │ jenkins │ v1.37.0 │ 18 Oct 25 09:52 UTC │ 18 Oct 25 09:54 UTC │
	│ stop    │ stopped-upgrade-461592 stop                                                                                                                                                                                                                                             │ stopped-upgrade-461592    │ jenkins │ v1.32.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:53 UTC │
	│ start   │ -p stopped-upgrade-461592 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ delete  │ -p kubernetes-upgrade-689545                                                                                                                                                                                                                                            │ kubernetes-upgrade-689545 │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:53 UTC │
	│ start   │ -p cert-options-161184 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                     │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p cert-expiration-464564 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ cert-expiration-464564    │ jenkins │ v1.37.0 │ 18 Oct 25 09:53 UTC │ 18 Oct 25 09:54 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-461592 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ delete  │ -p stopped-upgrade-461592                                                                                                                                                                                                                                               │ stopped-upgrade-461592    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:55 UTC │
	│ start   │ -p pause-551330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-551330              │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:55 UTC │
	│ ssh     │ cert-options-161184 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ ssh     │ -p cert-options-161184 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ delete  │ -p cert-options-161184                                                                                                                                                                                                                                                  │ cert-options-161184       │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                       │ no-preload-231061         │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ delete  │ -p cert-expiration-464564                                                                                                                                                                                                                                               │ cert-expiration-464564    │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │ 18 Oct 25 09:54 UTC │
	│ start   │ -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                        │ embed-certs-512028        │ jenkins │ v1.37.0 │ 18 Oct 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-066041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:55 UTC │ 18 Oct 25 09:55 UTC │
	│ stop    │ -p old-k8s-version-066041 --alsologtostderr -v=3                                                                                                                                                                                                                        │ old-k8s-version-066041    │ jenkins │ v1.37.0 │ 18 Oct 25 09:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:54:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
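	
Every entry below follows the klog-style layout documented in this header. Purely as an illustration (this parser is hypothetical and not part of the test suite), the format string [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg maps onto a small Go regexp; the sample line is the first entry of the log that follows.

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine encodes the header's documented layout:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := "I1018 09:54:37.633375  147912 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
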
	I1018 09:54:37.633375  147912 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:54:37.633618  147912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:37.633635  147912 out.go:374] Setting ErrFile to fd 2...
	I1018 09:54:37.633639  147912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:54:37.634016  147912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:54:37.634722  147912 out.go:368] Setting JSON to false
	I1018 09:54:37.635716  147912 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5818,"bootTime":1760775460,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:54:37.635848  147912 start.go:141] virtualization: kvm guest
	I1018 09:54:37.638001  147912 out.go:179] * [embed-certs-512028] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:54:37.639390  147912 notify.go:220] Checking for updates...
	I1018 09:54:37.639434  147912 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:54:37.640598  147912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:54:37.641987  147912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:54:37.643398  147912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:37.644555  147912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:54:37.645980  147912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:54:37.647952  147912 config.go:182] Loaded profile config "no-preload-231061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:37.648105  147912 config.go:182] Loaded profile config "old-k8s-version-066041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:54:37.648301  147912 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:37.648415  147912 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:54:37.689394  147912 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:54:37.690823  147912 start.go:305] selected driver: kvm2
	I1018 09:54:37.690844  147912 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:54:37.690860  147912 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:54:37.691922  147912 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:37.692033  147912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:37.711131  147912 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:37.711185  147912 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 09:54:37.726548  147912 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 09:54:37.726596  147912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:54:37.726844  147912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:54:37.726877  147912 cni.go:84] Creating CNI manager for ""
	I1018 09:54:37.726923  147912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:37.726932  147912 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 09:54:37.726975  147912 start.go:349] cluster config:
	{Name:embed-certs-512028 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-512028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:37.727061  147912 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:37.728830  147912 out.go:179] * Starting "embed-certs-512028" primary control-plane node in "embed-certs-512028" cluster
	I1018 09:54:33.202315  147724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:33.202471  147724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/config.json ...
	I1018 09:54:33.202507  147724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/config.json: {Name:mk4c4ae2924179b7addfe96c094be3e7eb036dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:33.202543  147724 cache.go:107] acquiring lock: {Name:mk694e0cfe524409f6f44f58811b798691aa11aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202573  147724 cache.go:107] acquiring lock: {Name:mkc1318dfc0a8499a0316ae38be903831a1f7f57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202545  147724 cache.go:107] acquiring lock: {Name:mk41703bfc436ae2592799cfc3287c3240cc1e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202640  147724 cache.go:107] acquiring lock: {Name:mkbbb31643d4357cf85a0da65f3b1a8beafb6de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202674  147724 cache.go:115] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:54:33.202691  147724 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 162.395µs
	I1018 09:54:33.202701  147724 start.go:360] acquireMachinesLock for no-preload-231061: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:54:33.202712  147724 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:54:33.202732  147724 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:54:33.202785  147724 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:54:33.202803  147724 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:54:33.202848  147724 cache.go:107] acquiring lock: {Name:mk0d2e817585d200d58f7d2c6afffbf74d04e57f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202852  147724 cache.go:107] acquiring lock: {Name:mk9b9918b731bcee06e67fee4ba588d52dbec6f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.202926  147724 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:54:33.202965  147724 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:54:33.202609  147724 cache.go:107] acquiring lock: {Name:mkacf1123e0c583992211df9fbe06e6b9002c23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.203155  147724 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:54:33.203063  147724 cache.go:107] acquiring lock: {Name:mk518c4968b55574cc240941de1656772422774f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:54:33.203250  147724 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:54:33.204523  147724 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:54:33.204533  147724 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:54:33.204538  147724 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:54:33.204524  147724 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:54:33.204583  147724 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:54:33.204748  147724 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:54:33.204792  147724 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:54:33.819576  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:54:33.834526  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:54:33.843094  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:54:33.848856  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:54:33.876377  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:54:33.880925  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:54:33.895299  147724 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:54:33.958585  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:54:33.958617  147724 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 756.01182ms
	I1018 09:54:33.958635  147724 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:54:34.178660  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:54:34.178692  147724 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 975.841633ms
	I1018 09:54:34.178710  147724 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:54:35.119820  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:54:35.119874  147724 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.916802558s
	I1018 09:54:35.119893  147724 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:54:35.211164  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:54:35.211203  147724 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.008562245s
	I1018 09:54:35.211224  147724 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:54:35.290227  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:54:35.290270  147724 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.08772638s
	I1018 09:54:35.290289  147724 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:54:35.327658  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:54:35.327694  147724 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.125174563s
	I1018 09:54:35.327708  147724 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:54:35.618217  147724 cache.go:157] /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:54:35.618254  147724 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.415408937s
	I1018 09:54:35.618271  147724 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:54:35.618297  147724 cache.go:87] Successfully saved all images to host disk.
	I1018 09:54:38.962332  147357 start.go:364] duration metric: took 24.702301489s to acquireMachinesLock for "pause-551330"
	I1018 09:54:38.962390  147357 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:54:38.962398  147357 fix.go:54] fixHost starting: 
	I1018 09:54:38.962817  147357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:38.962855  147357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:38.979503  147357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I1018 09:54:38.979956  147357 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:38.980456  147357 main.go:141] libmachine: Using API Version  1
	I1018 09:54:38.980481  147357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:38.980936  147357 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:38.981194  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.981378  147357 main.go:141] libmachine: (pause-551330) Calling .GetState
	I1018 09:54:38.982977  147357 fix.go:112] recreateIfNeeded on pause-551330: state=Running err=<nil>
	W1018 09:54:38.983007  147357 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:54:38.985252  147357 out.go:252] * Updating the running kvm2 "pause-551330" VM ...
	I1018 09:54:38.985290  147357 machine.go:93] provisionDockerMachine start ...
	I1018 09:54:38.985309  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:38.985539  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:38.988542  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989090  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:38.989123  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:38.989325  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:38.989635  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.989850  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:38.990035  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:38.990231  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.990553  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:38.990567  147357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:54:39.097823  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.097854  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098174  147357 buildroot.go:166] provisioning hostname "pause-551330"
	I1018 09:54:39.098212  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.098449  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.101976  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102371  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.102406  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.102652  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.102836  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103019  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.103152  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.103309  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.103531  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.103542  147357 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-551330 && echo "pause-551330" | sudo tee /etc/hostname
	I1018 09:54:36.964788  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.019125  147302 main.go:141] libmachine: (old-k8s-version-066041) found domain IP: 192.168.50.251
	I1018 09:54:37.019171  147302 main.go:141] libmachine: (old-k8s-version-066041) reserving static IP address...
	I1018 09:54:37.019187  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has current primary IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.019881  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-066041", mac: "52:54:00:f6:0c:31", ip: "192.168.50.251"} in network mk-old-k8s-version-066041
	I1018 09:54:37.258315  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Getting to WaitForSSH function...
	I1018 09:54:37.258358  147302 main.go:141] libmachine: (old-k8s-version-066041) reserved static IP address 192.168.50.251 for domain old-k8s-version-066041
	I1018 09:54:37.258393  147302 main.go:141] libmachine: (old-k8s-version-066041) waiting for SSH...
	I1018 09:54:37.261696  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.262220  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.262297  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.262475  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Using SSH client type: external
	I1018 09:54:37.262511  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | Using SSH private key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa (-rw-------)
	I1018 09:54:37.262563  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1018 09:54:37.262590  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | About to run SSH command:
	I1018 09:54:37.262610  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | exit 0
	I1018 09:54:37.399274  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | SSH cmd err, output: <nil>: 
	I1018 09:54:37.399653  147302 main.go:141] libmachine: (old-k8s-version-066041) domain creation complete
	I1018 09:54:37.399997  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetConfigRaw
	I1018 09:54:37.400737  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:37.400969  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:37.401155  147302 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1018 09:54:37.401179  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetState
	I1018 09:54:37.402879  147302 main.go:141] libmachine: Detecting operating system of created instance...
	I1018 09:54:37.402893  147302 main.go:141] libmachine: Waiting for SSH to be available...
	I1018 09:54:37.402899  147302 main.go:141] libmachine: Getting to WaitForSSH function...
	I1018 09:54:37.402906  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.406094  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.406529  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.406557  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.406736  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.406953  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.407133  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.407325  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.407499  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.407769  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.407781  147302 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1018 09:54:37.526535  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:37.526565  147302 main.go:141] libmachine: Detecting the provisioner...
	I1018 09:54:37.526575  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.530330  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.530741  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.530775  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.531026  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.531252  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.531449  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.531617  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.531787  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.532030  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.532044  147302 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1018 09:54:37.650975  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1018 09:54:37.651061  147302 main.go:141] libmachine: found compatible host: buildroot
	I1018 09:54:37.651074  147302 main.go:141] libmachine: Provisioning with buildroot...
	I1018 09:54:37.651084  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.651387  147302 buildroot.go:166] provisioning hostname "old-k8s-version-066041"
	I1018 09:54:37.651418  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.651639  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.655016  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.655484  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.655515  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.655779  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.655984  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.656192  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.656366  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.656547  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.656851  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.656872  147302 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-066041 && echo "old-k8s-version-066041" | sudo tee /etc/hostname
	I1018 09:54:37.797995  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066041
	
	I1018 09:54:37.798024  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.801544  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.801971  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.802001  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.802237  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:37.802466  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.802653  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:37.802811  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:37.803008  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:37.803252  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:37.803270  147302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-066041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-066041/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-066041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:54:37.930333  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:37.930376  147302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:54:37.930408  147302 buildroot.go:174] setting up certificates
	I1018 09:54:37.930423  147302 provision.go:84] configureAuth start
	I1018 09:54:37.930442  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetMachineName
	I1018 09:54:37.930795  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:37.934413  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.934897  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.934925  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.935161  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:37.937762  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.938183  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:37.938226  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:37.938420  147302 provision.go:143] copyHostCerts
	I1018 09:54:37.938483  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:54:37.938500  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:54:37.938574  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:54:37.938708  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:54:37.938719  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:54:37.938750  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:54:37.938808  147302 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:54:37.938818  147302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:54:37.938854  147302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:54:37.938965  147302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-066041 san=[127.0.0.1 192.168.50.251 localhost minikube old-k8s-version-066041]
	I1018 09:54:38.243173  147302 provision.go:177] copyRemoteCerts
	I1018 09:54:38.243250  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:54:38.243284  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.246611  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.247053  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.247082  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.247342  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.247597  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.247776  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.247982  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.337768  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:54:38.367393  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:54:38.401149  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:54:38.431217  147302 provision.go:87] duration metric: took 500.772136ms to configureAuth
	I1018 09:54:38.431259  147302 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:54:38.431422  147302 config.go:182] Loaded profile config "old-k8s-version-066041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:54:38.431498  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.435250  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.435675  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.435705  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.435943  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.436207  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.436375  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.436500  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.436620  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.436877  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:38.436901  147302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:54:38.688955  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:54:38.688978  147302 main.go:141] libmachine: Checking connection to Docker...
	I1018 09:54:38.688987  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetURL
	I1018 09:54:38.690225  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | using libvirt version 8000000
	I1018 09:54:38.693295  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.693715  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.693747  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.693971  147302 main.go:141] libmachine: Docker is up and running!
	I1018 09:54:38.693984  147302 main.go:141] libmachine: Reticulating splines...
	I1018 09:54:38.693993  147302 client.go:171] duration metric: took 21.938575911s to LocalClient.Create
	I1018 09:54:38.694028  147302 start.go:167] duration metric: took 21.938647418s to libmachine.API.Create "old-k8s-version-066041"
	I1018 09:54:38.694042  147302 start.go:293] postStartSetup for "old-k8s-version-066041" (driver="kvm2")
	I1018 09:54:38.694057  147302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:54:38.694084  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.694359  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:54:38.694385  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.697086  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.697563  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.697594  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.697814  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.698024  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.698281  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.698472  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.788293  147302 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:54:38.793094  147302 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:54:38.793123  147302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:54:38.793224  147302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:54:38.793318  147302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:54:38.793438  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:54:38.805177  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:38.834992  147302 start.go:296] duration metric: took 140.929877ms for postStartSetup
	I1018 09:54:38.835056  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetConfigRaw
	I1018 09:54:38.835854  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:38.838584  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.838946  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.838973  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.839261  147302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/config.json ...
	I1018 09:54:38.839471  147302 start.go:128] duration metric: took 22.213092077s to createHost
	I1018 09:54:38.839497  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.842295  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.842765  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.842796  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.842969  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.843174  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.843358  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.843491  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.843635  147302 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:38.843891  147302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I1018 09:54:38.843905  147302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:54:38.962108  147302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760781278.924811921
	
	I1018 09:54:38.962133  147302 fix.go:216] guest clock: 1760781278.924811921
	I1018 09:54:38.962179  147302 fix.go:229] Guest: 2025-10-18 09:54:38.924811921 +0000 UTC Remote: 2025-10-18 09:54:38.839484303 +0000 UTC m=+26.990656459 (delta=85.327618ms)
	I1018 09:54:38.962231  147302 fix.go:200] guest clock delta is within tolerance: 85.327618ms
	I1018 09:54:38.962240  147302 start.go:83] releasing machines lock for "old-k8s-version-066041", held for 22.336036835s
	I1018 09:54:38.962273  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.962648  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:38.966034  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.966411  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.966445  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.966740  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967455  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967670  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .DriverName
	I1018 09:54:38.967761  147302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:54:38.967823  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.967952  147302 ssh_runner.go:195] Run: cat /version.json
	I1018 09:54:38.967983  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHHostname
	I1018 09:54:38.971470  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.971707  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.971965  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.971993  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.972214  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:38.972241  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:38.972270  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.972448  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHPort
	I1018 09:54:38.972547  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.972654  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHKeyPath
	I1018 09:54:38.972752  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.972815  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetSSHUsername
	I1018 09:54:38.972952  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:38.972956  147302 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/old-k8s-version-066041/id_rsa Username:docker}
	I1018 09:54:39.061030  147302 ssh_runner.go:195] Run: systemctl --version
	I1018 09:54:39.100196  147302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:54:39.269948  147302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:54:39.279307  147302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:54:39.279398  147302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:54:39.302804  147302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:54:39.302835  147302 start.go:495] detecting cgroup driver to use...
	I1018 09:54:39.302909  147302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:54:39.324743  147302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:54:39.342072  147302 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:54:39.342133  147302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:54:39.364426  147302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:54:39.382068  147302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:54:39.541247  147302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:54:39.756993  147302 docker.go:234] disabling docker service ...
	I1018 09:54:39.757061  147302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:54:39.774382  147302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:54:39.789799  147302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:54:39.982894  147302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:54:40.151912  147302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:54:40.170278  147302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:54:40.199305  147302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:54:40.199377  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.214416  147302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:54:40.214492  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.228288  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.240814  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.253368  147302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:54:40.266594  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.279518  147302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:40.300586  147302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
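Taken together, the sed/grep edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in this log, with the TOML section headers assumed from cri-o's standard config layout, not a dump of the real file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]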
	I1018 09:54:40.313104  147302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:54:40.323839  147302 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 09:54:40.323916  147302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 09:54:40.346718  147302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
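The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is why the modprobe follows, and IPv4 forwarding is then enabled directly through /proc. Equivalent after-the-fact checks (standard kmod/procps commands, included for orientation):

	lsmod | grep br_netfilter                    # module present after the modprobe
	sysctl net.bridge.bridge-nf-call-iptables    # now resolves instead of erroring
	cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above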
	I1018 09:54:40.359618  147302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:40.509746  147302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:54:40.631903  147302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:54:40.631975  147302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:54:40.637320  147302 start.go:563] Will wait 60s for crictl version
	I1018 09:54:40.637384  147302 ssh_runner.go:195] Run: which crictl
	I1018 09:54:40.641796  147302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:54:40.683365  147302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:54:40.683471  147302 ssh_runner.go:195] Run: crio --version
	I1018 09:54:40.713622  147302 ssh_runner.go:195] Run: crio --version
	I1018 09:54:40.744231  147302 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I1018 09:54:40.745494  147302 main.go:141] libmachine: (old-k8s-version-066041) Calling .GetIP
	I1018 09:54:40.748398  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:40.748723  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:0c:31", ip: ""} in network mk-old-k8s-version-066041: {Iface:virbr4 ExpiryTime:2025-10-18 10:54:33 +0000 UTC Type:0 Mac:52:54:00:f6:0c:31 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:old-k8s-version-066041 Clientid:01:52:54:00:f6:0c:31}
	I1018 09:54:40.748747  147302 main.go:141] libmachine: (old-k8s-version-066041) DBG | domain old-k8s-version-066041 has defined IP address 192.168.50.251 and MAC address 52:54:00:f6:0c:31 in network mk-old-k8s-version-066041
	I1018 09:54:40.749039  147302 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1018 09:54:40.753642  147302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
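The bash pipeline above rewrites /etc/hosts in one shot: it filters out any stale host.minikube.internal line, appends the current gateway address, writes to a temp file, and copies it back with sudo. For this run the net effect is a single added entry that lets the guest reach the host by a stable name:

	192.168.50.1	host.minikube.internal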
	I1018 09:54:40.768482  147302 kubeadm.go:883] updating cluster {Name:old-k8s-version-066041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:54:40.768620  147302 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:54:40.768695  147302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:40.804721  147302 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1018 09:54:40.804820  147302 ssh_runner.go:195] Run: which lz4
	I1018 09:54:40.809435  147302 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1018 09:54:40.814063  147302 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1018 09:54:40.814097  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1018 09:54:37.730096  147912 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:37.730162  147912 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:54:37.730178  147912 cache.go:58] Caching tarball of preloaded images
	I1018 09:54:37.730271  147912 preload.go:233] Found /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:54:37.730285  147912 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:54:37.730418  147912 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/embed-certs-512028/config.json ...
	I1018 09:54:37.730453  147912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/embed-certs-512028/config.json: {Name:mk11a728f68d2fd3984d684d4680f1a594ae15a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:37.730700  147912 start.go:360] acquireMachinesLock for embed-certs-512028: {Name:mk2e837b552f1de7aa96cf58cf0f422840e69787 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1018 09:54:39.229908  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-551330
	
	I1018 09:54:39.229947  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.233649  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234005  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.234039  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.234273  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.234500  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234680  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.234817  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.234984  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.235237  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.235255  147357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-551330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-551330/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-551330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:54:39.347064  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:54:39.347103  147357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21764-104457/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-104457/.minikube}
	I1018 09:54:39.347187  147357 buildroot.go:174] setting up certificates
	I1018 09:54:39.347206  147357 provision.go:84] configureAuth start
	I1018 09:54:39.347227  147357 main.go:141] libmachine: (pause-551330) Calling .GetMachineName
	I1018 09:54:39.347563  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:39.351095  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351587  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.351618  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.351960  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.355289  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.355813  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.355848  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.356065  147357 provision.go:143] copyHostCerts
	I1018 09:54:39.356129  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem, removing ...
	I1018 09:54:39.356164  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem
	I1018 09:54:39.356239  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/ca.pem (1082 bytes)
	I1018 09:54:39.356342  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem, removing ...
	I1018 09:54:39.356350  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem
	I1018 09:54:39.356373  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/cert.pem (1123 bytes)
	I1018 09:54:39.356429  147357 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem, removing ...
	I1018 09:54:39.356436  147357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem
	I1018 09:54:39.356455  147357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-104457/.minikube/key.pem (1675 bytes)
	I1018 09:54:39.356510  147357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem org=jenkins.pause-551330 san=[127.0.0.1 192.168.72.173 localhost minikube pause-551330]
	I1018 09:54:39.700579  147357 provision.go:177] copyRemoteCerts
	I1018 09:54:39.700702  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:54:39.700736  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.703988  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704373  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.704403  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.704662  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.704897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.705078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.705246  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:39.796151  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 09:54:39.835425  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:54:39.879533  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:54:39.916276  147357 provision.go:87] duration metric: took 569.05192ms to configureAuth
	I1018 09:54:39.916316  147357 buildroot.go:189] setting minikube options for container-runtime
	I1018 09:54:39.916597  147357 config.go:182] Loaded profile config "pause-551330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:54:39.916720  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:39.920699  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921180  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:39.921212  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:39.921477  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:39.921772  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.921975  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:39.922130  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:39.922335  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:39.922588  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:39.922609  147357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:54:45.810291  147724 start.go:364] duration metric: took 12.607562358s to acquireMachinesLock for "no-preload-231061"
	I1018 09:54:45.810369  147724 start.go:93] Provisioning new machine with config: &{Name:no-preload-231061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-231061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:54:45.810480  147724 start.go:125] createHost starting for "" (driver="kvm2")
	I1018 09:54:42.480936  147302 crio.go:462] duration metric: took 1.67154313s to copy over tarball
	I1018 09:54:42.481020  147302 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1018 09:54:44.310878  147302 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.829826211s)
	I1018 09:54:44.310909  147302 crio.go:469] duration metric: took 1.829937329s to extract the tarball
	I1018 09:54:44.310917  147302 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1018 09:54:44.356503  147302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:44.401885  147302 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:44.401911  147302 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:54:44.401920  147302 kubeadm.go:934] updating node { 192.168.50.251 8443 v1.28.0 crio true true} ...
	I1018 09:54:44.402068  147302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-066041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
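The rendered unit drop-in above pins the kubelet to this node's IP, hostname override, and bootstrap kubeconfig; it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Once installed, the merged unit can be inspected with systemd's own tooling (standard systemctl usage, not something the test runs):

	systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
	systemctl status kubelet  # ExecStart should show --node-ip=192.168.50.251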
	I1018 09:54:44.402229  147302 ssh_runner.go:195] Run: crio config
	I1018 09:54:44.447511  147302 cni.go:84] Creating CNI manager for ""
	I1018 09:54:44.447550  147302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:44.447579  147302 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:54:44.447611  147302 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-066041 NodeName:old-k8s-version-066041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:54:44.447786  147302 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-066041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
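The YAML above is written to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before init. A config like this can also be exercised without touching the node, which is handy when debugging a failed bring-up; both commands below are standard kubeadm (the validate subcommand exists in v1.26+, so it applies to the v1.28.0 binary used here) and are shown for reference rather than run by the test:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml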
	I1018 09:54:44.447865  147302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 09:54:44.459746  147302 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:54:44.459836  147302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:54:44.471346  147302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1018 09:54:44.491741  147302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:54:44.512324  147302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1018 09:54:44.532346  147302 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I1018 09:54:44.536548  147302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:54:44.551133  147302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:44.699648  147302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:54:44.720051  147302 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041 for IP: 192.168.50.251
	I1018 09:54:44.720082  147302 certs.go:195] generating shared ca certs ...
	I1018 09:54:44.720105  147302 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.720323  147302 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:54:44.720381  147302 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:54:44.720395  147302 certs.go:257] generating profile certs ...
	I1018 09:54:44.720472  147302 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key
	I1018 09:54:44.720503  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt with IP's: []
	I1018 09:54:44.902952  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt ...
	I1018 09:54:44.902986  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: {Name:mk1bd7ee7179de89578d9501a12aef2959c7dd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.903188  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key ...
	I1018 09:54:44.903203  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.key: {Name:mk07d294305490e2021d8bc26d7d12c849437a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:44.903290  147302 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7
	I1018 09:54:44.903307  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.251]
	I1018 09:54:45.098152  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 ...
	I1018 09:54:45.098194  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7: {Name:mkb51f3eccb5c76558dc66d9dac98c0cfd3ab8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.098424  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7 ...
	I1018 09:54:45.098466  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7: {Name:mke44f3de2a7fbcad7a9cc846715c6324b76fdb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.098556  147302 certs.go:382] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt.33486ef7 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt
	I1018 09:54:45.098631  147302 certs.go:386] copying /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key.33486ef7 -> /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key
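The apiserver certificate assembled above has to carry every address a client might dial, which is why it is generated with the SAN IP list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.251] (the in-cluster service VIPs, loopback, and the node IP). The SANs on the written cert can be checked with plain openssl (shown for reference):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'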
	I1018 09:54:45.098685  147302 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key
	I1018 09:54:45.098700  147302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt with IP's: []
	I1018 09:54:45.213527  147302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt ...
	I1018 09:54:45.213559  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt: {Name:mk750f42c193cb6914dd283f6631a022e4d49119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.213772  147302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key ...
	I1018 09:54:45.213796  147302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key: {Name:mk728ed852b0ae0881678a792e48ddf3af4012b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:45.214035  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:54:45.214081  147302 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:54:45.214088  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:54:45.214119  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:54:45.214180  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:54:45.214230  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:54:45.214290  147302 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:45.215018  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:54:45.249813  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:54:45.284502  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:54:45.315355  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:54:45.349397  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:54:45.380658  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:54:45.418852  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:54:45.452453  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:54:45.490111  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:54:45.530613  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:54:45.565761  147302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:54:45.598183  147302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:54:45.619504  147302 ssh_runner.go:195] Run: openssl version
	I1018 09:54:45.627145  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:54:45.641829  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.647764  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.647834  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:54:45.656285  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:54:45.671903  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:54:45.686925  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.694273  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.694361  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:54:45.703070  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:54:45.716791  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:54:45.729968  147302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.735572  147302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.735651  147302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:45.743223  147302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
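Each symlink name created above (51391683.0, 3ec20f2e.0, b5213941.0) is the OpenSSL subject hash of the corresponding PEM plus a ".0" suffix; that hashed-directory convention is how OpenSSL, and anything linked against it, locates trusted CAs under /etc/ssl/certs. The hash comes straight from the `openssl x509 -hash -noout` invocations in the log, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941    -> linked as /etc/ssl/certs/b5213941.0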
	I1018 09:54:45.756957  147302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:54:45.761712  147302 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:54:45.761787  147302 kubeadm.go:400] StartCluster: {Name:old-k8s-version-066041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-066041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:45.761899  147302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:54:45.762000  147302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:54:45.803429  147302 cri.go:89] found id: ""
	I1018 09:54:45.803518  147302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:54:45.818562  147302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:54:45.831741  147302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:54:45.844987  147302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:54:45.845009  147302 kubeadm.go:157] found existing configuration files:
	
	I1018 09:54:45.845056  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:54:45.858115  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:54:45.858203  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:54:45.872235  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:54:45.883500  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:54:45.883575  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:54:45.896698  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:54:45.911815  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:54:45.911876  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:54:45.925064  147302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:54:45.936022  147302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:54:45.936099  147302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
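Editor's note: the cleanup above follows one fixed pattern per kubeconfig file: grep for the expected control-plane endpoint and, when grep exits non-zero (status 1 for no match, status 2 for a missing file, as in the stderr blocks above), remove the file so kubeadm can regenerate it. A minimal Go sketch of that pattern, assuming plain local exec rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, path := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits 1 on no match and 2 on a missing file; either way
            // the config cannot point at the expected endpoint, so drop it.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                _ = exec.Command("sudo", "rm", "-f", path).Run()
                fmt.Println("removed stale config:", path)
            }
        }
    }
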
	I1018 09:54:45.948344  147302 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
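Editor's note: the init invocation above prefixes PATH with minikube's cached-binaries directory so the pinned kubeadm version is used, and passes an explicit preflight ignore list. A sketch of issuing the same command from Go; the ignore list is abbreviated here for readability and the paths are taken from the logged command:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "/bin/bash", "-c",
            `env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init `+
                `--config /var/tmp/minikube/kubeadm.yaml `+
                `--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`)
        // Stream kubeadm's phase output, which is what the kubeadm.go:318
        // lines in this log are echoing.
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }
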
	I1018 09:54:46.026739  147302 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 09:54:46.026797  147302 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:54:46.192281  147302 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:54:46.192464  147302 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:54:46.192587  147302 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:54:46.474848  147302 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:54:46.637184  147302 out.go:252]   - Generating certificates and keys ...
	I1018 09:54:46.637330  147302 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:54:46.637425  147302 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:54:46.689119  147302 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:54:45.845943  147724 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1018 09:54:45.846172  147724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:54:45.846216  147724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:54:45.862854  147724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I1018 09:54:45.863358  147724 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:54:45.863925  147724 main.go:141] libmachine: Using API Version  1
	I1018 09:54:45.863953  147724 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:54:45.864366  147724 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:54:45.864700  147724 main.go:141] libmachine: (no-preload-231061) Calling .GetMachineName
	I1018 09:54:45.864929  147724 main.go:141] libmachine: (no-preload-231061) Calling .DriverName
	I1018 09:54:45.865161  147724 start.go:159] libmachine.API.Create for "no-preload-231061" (driver="kvm2")
	I1018 09:54:45.865190  147724 client.go:168] LocalClient.Create starting
	I1018 09:54:45.865222  147724 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem
	I1018 09:54:45.865260  147724 main.go:141] libmachine: Decoding PEM data...
	I1018 09:54:45.865276  147724 main.go:141] libmachine: Parsing certificate...
	I1018 09:54:45.865335  147724 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem
	I1018 09:54:45.865353  147724 main.go:141] libmachine: Decoding PEM data...
	I1018 09:54:45.865364  147724 main.go:141] libmachine: Parsing certificate...
	I1018 09:54:45.865380  147724 main.go:141] libmachine: Running pre-create checks...
	I1018 09:54:45.865388  147724 main.go:141] libmachine: (no-preload-231061) Calling .PreCreateCheck
	I1018 09:54:45.865784  147724 main.go:141] libmachine: (no-preload-231061) Calling .GetConfigRaw
	I1018 09:54:45.866340  147724 main.go:141] libmachine: Creating machine...
	I1018 09:54:45.866357  147724 main.go:141] libmachine: (no-preload-231061) Calling .Create
	I1018 09:54:45.866503  147724 main.go:141] libmachine: (no-preload-231061) creating domain...
	I1018 09:54:45.866527  147724 main.go:141] libmachine: (no-preload-231061) creating network...
	I1018 09:54:45.868220  147724 main.go:141] libmachine: (no-preload-231061) DBG | found existing default network
	I1018 09:54:45.868429  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network connections='2'>
	I1018 09:54:45.868449  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>default</name>
	I1018 09:54:45.868463  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1018 09:54:45.868477  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <forward mode='nat'>
	I1018 09:54:45.868488  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <nat>
	I1018 09:54:45.868498  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <port start='1024' end='65535'/>
	I1018 09:54:45.868505  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </nat>
	I1018 09:54:45.868520  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </forward>
	I1018 09:54:45.868534  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1018 09:54:45.868547  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1018 09:54:45.868576  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1018 09:54:45.868597  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:45.868612  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1018 09:54:45.868621  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:45.868628  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:45.868638  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:45.868648  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.869419  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:45.869263  147991 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013bb0}
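Editor's note: the network.go:206 line above is the free-subnet probe: minikube walks a list of candidate private /24s and takes the first one no host interface already occupies. A hypothetical sketch of that check; the candidate list and function name are illustrative, not minikube's actual API:

    package main

    import (
        "fmt"
        "net"
    )

    // freeSubnet returns the first candidate CIDR that no local interface
    // already has an address inside.
    func freeSubnet(candidates []string) (*net.IPNet, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                continue
            }
            inUse := false
            for _, ifc := range ifaces {
                addrs, _ := ifc.Addrs()
                for _, a := range addrs {
                    if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                        inUse = true
                    }
                }
            }
            if !inUse {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among candidates")
    }

    func main() {
        s, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
        fmt.Println(s, err)
    }
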
	I1018 09:54:45.869441  147724 main.go:141] libmachine: (no-preload-231061) DBG | defining private network:
	I1018 09:54:45.869452  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.869460  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network>
	I1018 09:54:45.869470  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>mk-no-preload-231061</name>
	I1018 09:54:45.869480  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <dns enable='no'/>
	I1018 09:54:45.869493  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 09:54:45.869506  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:45.869516  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 09:54:45.869523  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:45.869554  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:45.869576  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:45.869589  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
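Editor's note: the XML block above is what gets handed to libvirt to create the isolated mk-* network. A minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt), assuming the local system hypervisor URI; error handling is trimmed to the essentials:

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    func defineNetwork(xml string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // Define the network from the XML above, then bring it up; this is
        // the "creating private network ... created" pair in the log.
        netw, err := conn.NetworkDefineXML(xml)
        if err != nil {
            return err
        }
        defer netw.Free()
        return netw.Create()
    }

    func main() {
        // Placeholder XML standing in for the full definition logged above.
        if err := defineNetwork(`<network><name>mk-example</name></network>`); err != nil {
            panic(err)
        }
    }
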
	I1018 09:54:46.000496  147724 main.go:141] libmachine: (no-preload-231061) DBG | creating private network mk-no-preload-231061 192.168.39.0/24...
	I1018 09:54:46.083075  147724 main.go:141] libmachine: (no-preload-231061) DBG | private network mk-no-preload-231061 192.168.39.0/24 created
	I1018 09:54:46.083332  147724 main.go:141] libmachine: (no-preload-231061) DBG | <network>
	I1018 09:54:46.083348  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>mk-no-preload-231061</name>
	I1018 09:54:46.083358  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>257b51a5-7a9f-4e55-b4e5-9268ae318ca4</uuid>
	I1018 09:54:46.083370  147724 main.go:141] libmachine: (no-preload-231061) setting up store path in /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 ...
	I1018 09:54:46.083380  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1018 09:54:46.083394  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <mac address='52:54:00:23:81:66'/>
	I1018 09:54:46.083401  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <dns enable='no'/>
	I1018 09:54:46.083412  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1018 09:54:46.083423  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <dhcp>
	I1018 09:54:46.083435  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1018 09:54:46.083445  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </dhcp>
	I1018 09:54:46.083461  147724 main.go:141] libmachine: (no-preload-231061) building disk image from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 09:54:46.083472  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </ip>
	I1018 09:54:46.083480  147724 main.go:141] libmachine: (no-preload-231061) DBG | </network>
	I1018 09:54:46.083501  147724 main.go:141] libmachine: (no-preload-231061) Downloading /home/jenkins/minikube-integration/21764-104457/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1018 09:54:46.083513  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:46.083537  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:46.083331  147991 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:46.373572  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:46.373396  147991 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/id_rsa...
	I1018 09:54:47.036823  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:47.036658  147991 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk...
	I1018 09:54:47.036870  147724 main.go:141] libmachine: (no-preload-231061) DBG | Writing magic tar header
	I1018 09:54:47.036890  147724 main.go:141] libmachine: (no-preload-231061) DBG | Writing SSH key tar header
	I1018 09:54:47.036903  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:47.036820  147991 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 ...
	I1018 09:54:47.037036  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061
	I1018 09:54:47.037058  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube/machines
	I1018 09:54:47.037073  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061 (perms=drwx------)
	I1018 09:54:47.037106  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:54:47.037131  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube/machines (perms=drwxr-xr-x)
	I1018 09:54:47.037157  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21764-104457
	I1018 09:54:47.037169  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457/.minikube (perms=drwxr-xr-x)
	I1018 09:54:47.037180  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1018 09:54:47.037192  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration/21764-104457 (perms=drwxrwxr-x)
	I1018 09:54:47.037210  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1018 09:54:47.037223  147724 main.go:141] libmachine: (no-preload-231061) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1018 09:54:47.037236  147724 main.go:141] libmachine: (no-preload-231061) defining domain...
	I1018 09:54:47.037248  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home/jenkins
	I1018 09:54:47.037261  147724 main.go:141] libmachine: (no-preload-231061) DBG | checking permissions on dir: /home
	I1018 09:54:47.037273  147724 main.go:141] libmachine: (no-preload-231061) DBG | skipping /home - not owner
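Editor's note: the permissions pass above walks from the machine directory up toward /, adding search (execute) bits on directories the CI user owns so qemu can traverse into the store path, and skipping ones it does not own (hence "skipping /home - not owner"). A rough sketch, assuming the exact bit pattern per directory (drwx------ vs drwxr-xr-x in the log) is a detail we gloss over:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensureTraversable adds the owner's search bit to each ancestor
    // directory; chmod fails on directories we don't own, which we skip.
    func ensureTraversable(dir string) {
        for p := dir; p != "/" && p != "."; p = filepath.Dir(p) {
            info, err := os.Stat(p)
            if err != nil {
                return
            }
            if err := os.Chmod(p, info.Mode().Perm()|0o100); err != nil {
                fmt.Println("skipping", p, "-", err) // e.g. not owner
            }
        }
    }

    func main() {
        ensureTraversable("/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061")
    }
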
	I1018 09:54:47.038566  147724 main.go:141] libmachine: (no-preload-231061) defining domain using XML: 
	I1018 09:54:47.038584  147724 main.go:141] libmachine: (no-preload-231061) <domain type='kvm'>
	I1018 09:54:47.038594  147724 main.go:141] libmachine: (no-preload-231061)   <name>no-preload-231061</name>
	I1018 09:54:47.038601  147724 main.go:141] libmachine: (no-preload-231061)   <memory unit='MiB'>3072</memory>
	I1018 09:54:47.038608  147724 main.go:141] libmachine: (no-preload-231061)   <vcpu>2</vcpu>
	I1018 09:54:47.038614  147724 main.go:141] libmachine: (no-preload-231061)   <features>
	I1018 09:54:47.038621  147724 main.go:141] libmachine: (no-preload-231061)     <acpi/>
	I1018 09:54:47.038632  147724 main.go:141] libmachine: (no-preload-231061)     <apic/>
	I1018 09:54:47.038639  147724 main.go:141] libmachine: (no-preload-231061)     <pae/>
	I1018 09:54:47.038648  147724 main.go:141] libmachine: (no-preload-231061)   </features>
	I1018 09:54:47.038680  147724 main.go:141] libmachine: (no-preload-231061)   <cpu mode='host-passthrough'>
	I1018 09:54:47.038716  147724 main.go:141] libmachine: (no-preload-231061)   </cpu>
	I1018 09:54:47.038750  147724 main.go:141] libmachine: (no-preload-231061)   <os>
	I1018 09:54:47.038776  147724 main.go:141] libmachine: (no-preload-231061)     <type>hvm</type>
	I1018 09:54:47.038790  147724 main.go:141] libmachine: (no-preload-231061)     <boot dev='cdrom'/>
	I1018 09:54:47.038800  147724 main.go:141] libmachine: (no-preload-231061)     <boot dev='hd'/>
	I1018 09:54:47.038812  147724 main.go:141] libmachine: (no-preload-231061)     <bootmenu enable='no'/>
	I1018 09:54:47.038821  147724 main.go:141] libmachine: (no-preload-231061)   </os>
	I1018 09:54:47.038830  147724 main.go:141] libmachine: (no-preload-231061)   <devices>
	I1018 09:54:47.038843  147724 main.go:141] libmachine: (no-preload-231061)     <disk type='file' device='cdrom'>
	I1018 09:54:47.038861  147724 main.go:141] libmachine: (no-preload-231061)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/boot2docker.iso'/>
	I1018 09:54:47.038877  147724 main.go:141] libmachine: (no-preload-231061)       <target dev='hdc' bus='scsi'/>
	I1018 09:54:47.038898  147724 main.go:141] libmachine: (no-preload-231061)       <readonly/>
	I1018 09:54:47.038918  147724 main.go:141] libmachine: (no-preload-231061)     </disk>
	I1018 09:54:47.038932  147724 main.go:141] libmachine: (no-preload-231061)     <disk type='file' device='disk'>
	I1018 09:54:47.038947  147724 main.go:141] libmachine: (no-preload-231061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1018 09:54:47.038963  147724 main.go:141] libmachine: (no-preload-231061)       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk'/>
	I1018 09:54:47.038975  147724 main.go:141] libmachine: (no-preload-231061)       <target dev='hda' bus='virtio'/>
	I1018 09:54:47.038988  147724 main.go:141] libmachine: (no-preload-231061)     </disk>
	I1018 09:54:47.039004  147724 main.go:141] libmachine: (no-preload-231061)     <interface type='network'>
	I1018 09:54:47.039018  147724 main.go:141] libmachine: (no-preload-231061)       <source network='mk-no-preload-231061'/>
	I1018 09:54:47.039029  147724 main.go:141] libmachine: (no-preload-231061)       <model type='virtio'/>
	I1018 09:54:47.039041  147724 main.go:141] libmachine: (no-preload-231061)     </interface>
	I1018 09:54:47.039049  147724 main.go:141] libmachine: (no-preload-231061)     <interface type='network'>
	I1018 09:54:47.039061  147724 main.go:141] libmachine: (no-preload-231061)       <source network='default'/>
	I1018 09:54:47.039076  147724 main.go:141] libmachine: (no-preload-231061)       <model type='virtio'/>
	I1018 09:54:47.039087  147724 main.go:141] libmachine: (no-preload-231061)     </interface>
	I1018 09:54:47.039098  147724 main.go:141] libmachine: (no-preload-231061)     <serial type='pty'>
	I1018 09:54:47.039113  147724 main.go:141] libmachine: (no-preload-231061)       <target port='0'/>
	I1018 09:54:47.039123  147724 main.go:141] libmachine: (no-preload-231061)     </serial>
	I1018 09:54:47.039131  147724 main.go:141] libmachine: (no-preload-231061)     <console type='pty'>
	I1018 09:54:47.039160  147724 main.go:141] libmachine: (no-preload-231061)       <target type='serial' port='0'/>
	I1018 09:54:47.039172  147724 main.go:141] libmachine: (no-preload-231061)     </console>
	I1018 09:54:47.039179  147724 main.go:141] libmachine: (no-preload-231061)     <rng model='virtio'>
	I1018 09:54:47.039193  147724 main.go:141] libmachine: (no-preload-231061)       <backend model='random'>/dev/random</backend>
	I1018 09:54:47.039203  147724 main.go:141] libmachine: (no-preload-231061)     </rng>
	I1018 09:54:47.039212  147724 main.go:141] libmachine: (no-preload-231061)   </devices>
	I1018 09:54:47.039222  147724 main.go:141] libmachine: (no-preload-231061) </domain>
	I1018 09:54:47.039246  147724 main.go:141] libmachine: (no-preload-231061) 
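Editor's note: the domain XML above is defined but not yet started; the "getting domain XML" and "starting domain" lines that follow map onto two further libvirt calls. A sketch with the Go bindings, under the same qemu:///system assumption as before:

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    func defineAndStart(uri, domainXML string) error {
        conn, err := libvirt.NewConnect(uri)
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        // "getting domain XML..." corresponds to GetXMLDesc: libvirt returns
        // the expanded definition it actually stored (the DBG dump below,
        // with generated UUID, PCI addresses, and MACs filled in).
        if _, err := dom.GetXMLDesc(0); err != nil {
            return err
        }
        // "starting domain..." maps to Create, which boots the defined domain.
        return dom.Create()
    }

    func main() {
        _ = defineAndStart("qemu:///system", `<domain type='kvm'>...</domain>`)
    }
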
	I1018 09:54:47.198970  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:3d:7c:f4 in network default
	I1018 09:54:47.200315  147724 main.go:141] libmachine: (no-preload-231061) starting domain...
	I1018 09:54:47.200391  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:47.200409  147724 main.go:141] libmachine: (no-preload-231061) ensuring networks are active...
	I1018 09:54:47.201319  147724 main.go:141] libmachine: (no-preload-231061) Ensuring network default is active
	I1018 09:54:47.201942  147724 main.go:141] libmachine: (no-preload-231061) Ensuring network mk-no-preload-231061 is active
	I1018 09:54:47.203009  147724 main.go:141] libmachine: (no-preload-231061) getting domain XML...
	I1018 09:54:47.204429  147724 main.go:141] libmachine: (no-preload-231061) DBG | starting domain XML:
	I1018 09:54:47.204451  147724 main.go:141] libmachine: (no-preload-231061) DBG | <domain type='kvm'>
	I1018 09:54:47.204476  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <name>no-preload-231061</name>
	I1018 09:54:47.204495  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <uuid>7d822fd5-f00f-41a7-af38-4e50b606b202</uuid>
	I1018 09:54:47.204515  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <memory unit='KiB'>3145728</memory>
	I1018 09:54:47.204530  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1018 09:54:47.204542  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <vcpu placement='static'>2</vcpu>
	I1018 09:54:47.204549  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <os>
	I1018 09:54:47.204561  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1018 09:54:47.204572  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <boot dev='cdrom'/>
	I1018 09:54:47.204580  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <boot dev='hd'/>
	I1018 09:54:47.204586  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <bootmenu enable='no'/>
	I1018 09:54:47.204594  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </os>
	I1018 09:54:47.204608  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <features>
	I1018 09:54:47.204617  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <acpi/>
	I1018 09:54:47.204623  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <apic/>
	I1018 09:54:47.204630  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <pae/>
	I1018 09:54:47.204636  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </features>
	I1018 09:54:47.204645  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1018 09:54:47.204651  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <clock offset='utc'/>
	I1018 09:54:47.204660  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_poweroff>destroy</on_poweroff>
	I1018 09:54:47.204666  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_reboot>restart</on_reboot>
	I1018 09:54:47.204705  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <on_crash>destroy</on_crash>
	I1018 09:54:47.204732  147724 main.go:141] libmachine: (no-preload-231061) DBG |   <devices>
	I1018 09:54:47.204747  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1018 09:54:47.204757  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <disk type='file' device='cdrom'>
	I1018 09:54:47.204768  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <driver name='qemu' type='raw'/>
	I1018 09:54:47.204784  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/boot2docker.iso'/>
	I1018 09:54:47.204795  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target dev='hdc' bus='scsi'/>
	I1018 09:54:47.204806  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <readonly/>
	I1018 09:54:47.204817  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1018 09:54:47.204829  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </disk>
	I1018 09:54:47.204838  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <disk type='file' device='disk'>
	I1018 09:54:47.204854  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1018 09:54:47.204868  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source file='/home/jenkins/minikube-integration/21764-104457/.minikube/machines/no-preload-231061/no-preload-231061.rawdisk'/>
	I1018 09:54:47.204876  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target dev='hda' bus='virtio'/>
	I1018 09:54:47.204884  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1018 09:54:47.204888  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </disk>
	I1018 09:54:47.204908  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1018 09:54:47.204918  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1018 09:54:47.204926  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </controller>
	I1018 09:54:47.204939  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1018 09:54:47.204952  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1018 09:54:47.204966  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1018 09:54:47.204978  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </controller>
	I1018 09:54:47.204990  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <interface type='network'>
	I1018 09:54:47.205001  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <mac address='52:54:00:e0:ab:92'/>
	I1018 09:54:47.205013  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source network='mk-no-preload-231061'/>
	I1018 09:54:47.205038  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <model type='virtio'/>
	I1018 09:54:47.205060  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1018 09:54:47.205072  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </interface>
	I1018 09:54:47.205083  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <interface type='network'>
	I1018 09:54:47.205097  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <mac address='52:54:00:3d:7c:f4'/>
	I1018 09:54:47.205111  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <source network='default'/>
	I1018 09:54:47.205121  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <model type='virtio'/>
	I1018 09:54:47.205134  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1018 09:54:47.205161  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </interface>
	I1018 09:54:47.205174  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <serial type='pty'>
	I1018 09:54:47.205187  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target type='isa-serial' port='0'>
	I1018 09:54:47.205201  147724 main.go:141] libmachine: (no-preload-231061) DBG |         <model name='isa-serial'/>
	I1018 09:54:47.205213  147724 main.go:141] libmachine: (no-preload-231061) DBG |       </target>
	I1018 09:54:47.205222  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </serial>
	I1018 09:54:47.205231  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <console type='pty'>
	I1018 09:54:47.205243  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <target type='serial' port='0'/>
	I1018 09:54:47.205256  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </console>
	I1018 09:54:47.205272  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <input type='mouse' bus='ps2'/>
	I1018 09:54:47.205290  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <input type='keyboard' bus='ps2'/>
	I1018 09:54:47.205308  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <audio id='1' type='none'/>
	I1018 09:54:47.205323  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <memballoon model='virtio'>
	I1018 09:54:47.205338  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1018 09:54:47.205346  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </memballoon>
	I1018 09:54:47.205353  147724 main.go:141] libmachine: (no-preload-231061) DBG |     <rng model='virtio'>
	I1018 09:54:47.205363  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <backend model='random'>/dev/random</backend>
	I1018 09:54:47.205373  147724 main.go:141] libmachine: (no-preload-231061) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1018 09:54:47.205380  147724 main.go:141] libmachine: (no-preload-231061) DBG |     </rng>
	I1018 09:54:47.205385  147724 main.go:141] libmachine: (no-preload-231061) DBG |   </devices>
	I1018 09:54:47.205390  147724 main.go:141] libmachine: (no-preload-231061) DBG | </domain>
	I1018 09:54:47.205396  147724 main.go:141] libmachine: (no-preload-231061) DBG | 
	I1018 09:54:45.547728  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:54:45.547759  147357 machine.go:96] duration metric: took 6.562461144s to provisionDockerMachine
	I1018 09:54:45.547771  147357 start.go:293] postStartSetup for "pause-551330" (driver="kvm2")
	I1018 09:54:45.547782  147357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:54:45.547799  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.548276  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:54:45.548309  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.552062  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552547  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.552577  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.552855  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.553105  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.553313  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.553552  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.639914  147357 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:54:45.645353  147357 info.go:137] Remote host: Buildroot 2025.02
	I1018 09:54:45.645387  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/addons for local assets ...
	I1018 09:54:45.645473  147357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-104457/.minikube/files for local assets ...
	I1018 09:54:45.645604  147357 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem -> 1083732.pem in /etc/ssl/certs
	I1018 09:54:45.645758  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:54:45.659585  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:45.694841  147357 start.go:296] duration metric: took 147.054302ms for postStartSetup
	I1018 09:54:45.694886  147357 fix.go:56] duration metric: took 6.732489537s for fixHost
	I1018 09:54:45.694915  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.698341  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.698803  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.698837  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.699078  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.699338  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699528  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.699695  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.699923  147357 main.go:141] libmachine: Using SSH client type: native
	I1018 09:54:45.700232  147357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I1018 09:54:45.700250  147357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1018 09:54:45.810095  147357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760781285.807505553
	
	I1018 09:54:45.810128  147357 fix.go:216] guest clock: 1760781285.807505553
	I1018 09:54:45.810152  147357 fix.go:229] Guest: 2025-10-18 09:54:45.807505553 +0000 UTC Remote: 2025-10-18 09:54:45.694891594 +0000 UTC m=+31.626040864 (delta=112.613959ms)
	I1018 09:54:45.810186  147357 fix.go:200] guest clock delta is within tolerance: 112.613959ms
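Editor's note: the fix.go lines above implement the guest-clock check: run `date +%s.%N` over SSH, parse the result as fractional seconds, and compare against the host wall clock. A self-contained sketch of the delta computation; the 2s tolerance is an assumption for illustration (the log only shows ~112ms being accepted):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as captured in the log above.
        out := "1760781285.807505553"
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            panic(err)
        }
        // The float parse truncates nanosecond precision, which is fine for
        // a tolerance check at millisecond scale.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("delta=%v within=%v\n", delta, delta <= 2*time.Second)
    }
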
	I1018 09:54:45.810194  147357 start.go:83] releasing machines lock for "pause-551330", held for 6.847826758s
	I1018 09:54:45.810229  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.810587  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:45.814246  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.814743  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.814775  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.815056  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815773  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.815980  147357 main.go:141] libmachine: (pause-551330) Calling .DriverName
	I1018 09:54:45.816084  147357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:54:45.816160  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.816286  147357 ssh_runner.go:195] Run: cat /version.json
	I1018 09:54:45.816327  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHHostname
	I1018 09:54:45.819953  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820134  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820449  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820481  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820622  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:45.820663  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:45.820699  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820897  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHPort
	I1018 09:54:45.820991  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821109  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHKeyPath
	I1018 09:54:45.821201  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821320  147357 main.go:141] libmachine: (pause-551330) Calling .GetSSHUsername
	I1018 09:54:45.821393  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.821479  147357 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/pause-551330/id_rsa Username:docker}
	I1018 09:54:45.898763  147357 ssh_runner.go:195] Run: systemctl --version
	I1018 09:54:45.938487  147357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:54:46.095823  147357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:54:46.106551  147357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:54:46.106642  147357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:54:46.124438  147357 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:54:46.124465  147357 start.go:495] detecting cgroup driver to use...
	I1018 09:54:46.124540  147357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:54:46.149929  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:54:46.170694  147357 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:54:46.170787  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:54:46.198018  147357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:54:46.223925  147357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:54:46.434671  147357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:54:46.628353  147357 docker.go:234] disabling docker service ...
	I1018 09:54:46.628436  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:54:46.659616  147357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:54:46.678749  147357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:54:46.883707  147357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:54:47.065520  147357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:54:47.083763  147357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:54:47.110596  147357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:54:47.110666  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.123888  147357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:54:47.123960  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.141027  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.153739  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.167386  147357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:54:47.181822  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.195818  147357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.213241  147357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:54:47.232199  147357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:54:47.246299  147357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:54:47.263519  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:47.457540  147357 ssh_runner.go:195] Run: sudo systemctl restart crio
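Editor's note: the sed invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) and inject the unprivileged-port sysctl before crio is restarted. A local-file equivalent of the two key edits in Go, as a sketch rather than minikube's actual crio.go code:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        b, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        s := string(b)
        // Equivalent of the two logged sed edits: pin the pause image and
        // switch the cgroup manager to cgroupfs.
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
            panic(err)
        }
    }
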
	I1018 09:54:46.911245  147302 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:54:47.620570  147302 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:54:48.034661  147302 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:54:48.239708  147302 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:54:48.239867  147302 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-066041] and IPs [192.168.50.251 127.0.0.1 ::1]
	I1018 09:54:48.323776  147302 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:54:48.323986  147302 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-066041] and IPs [192.168.50.251 127.0.0.1 ::1]
	I1018 09:54:48.474272  147302 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:54:48.535998  147302 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:54:48.754036  147302 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:54:48.754189  147302 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:54:49.024000  147302 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:54:49.182665  147302 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:54:49.375417  147302 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:54:49.521924  147302 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:54:49.522077  147302 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:54:49.525309  147302 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:54:49.527278  147302 out.go:252]   - Booting up control plane ...
	I1018 09:54:49.527417  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:54:49.527790  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:54:49.529375  147302 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:54:49.548388  147302 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:54:49.549879  147302 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:54:49.550070  147302 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:54:49.747895  147302 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
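Editor's note: during the wait-control-plane phase, readiness ultimately shows up as the apiserver answering its health endpoint; the "healthy after 6.504105 seconds" line further down is the end of that wait. A rough sketch of polling /healthz with the 4m budget from the kubeadm message; the address and the skip-verify transport are illustrative assumptions, not kubeadm's actual probe:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.251:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("control plane healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for control plane")
    }
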
	I1018 09:54:49.097918  147724 main.go:141] libmachine: (no-preload-231061) waiting for domain to start...
	I1018 09:54:49.099509  147724 main.go:141] libmachine: (no-preload-231061) domain is now running
	I1018 09:54:49.099537  147724 main.go:141] libmachine: (no-preload-231061) waiting for IP...
	I1018 09:54:49.100383  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.101102  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.101126  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.101474  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.101568  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.101489  147991 retry.go:31] will retry after 256.251401ms: waiting for domain to come up
	I1018 09:54:49.360118  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.360837  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.360861  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.361241  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.361268  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.361215  147991 retry.go:31] will retry after 369.345746ms: waiting for domain to come up
	I1018 09:54:49.731857  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:49.732528  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:49.732571  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:49.732885  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:49.732934  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:49.732865  147991 retry.go:31] will retry after 375.412221ms: waiting for domain to come up
	I1018 09:54:50.109876  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:50.110632  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:50.110650  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:50.111035  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:50.111065  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:50.111019  147991 retry.go:31] will retry after 586.376388ms: waiting for domain to come up
	I1018 09:54:50.698916  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:50.699561  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:50.699586  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:50.699975  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:50.700007  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:50.699918  147991 retry.go:31] will retry after 630.515699ms: waiting for domain to come up
	I1018 09:54:51.332627  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:51.333471  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:51.333500  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:51.333936  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:51.334021  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:51.333934  147991 retry.go:31] will retry after 722.312538ms: waiting for domain to come up
	I1018 09:54:52.057791  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:52.058692  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:52.058717  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:52.059068  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:52.059113  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:52.059045  147991 retry.go:31] will retry after 1.066900916s: waiting for domain to come up
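Editor's note: the retry.go:31 lines above show the wait-for-IP loop: each pass scans the DHCP leases (then ARP) for the new domain's MAC and, on a miss, sleeps a growing, jittered interval before retrying. A sketch of that loop shape; the lookup callback stands in for the lease/ARP scan and the backoff constants are assumptions:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with a growing, jittered delay until it reports
    // an address or the overall budget is spent.
    func waitForIP(lookup func() (string, bool), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
            if backoff < 2*time.Second {
                backoff += backoff / 2
            }
        }
        return "", fmt.Errorf("timed out waiting for domain IP")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, bool) {
            tries++
            return "192.168.39.2", tries > 3 // fake lease appearing on try 4
        }, time.Minute)
        fmt.Println(ip, err)
    }
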
	I1018 09:54:54.210056  147357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.752452014s)
	I1018 09:54:54.210106  147357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:54:54.210198  147357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:54:54.215857  147357 start.go:563] Will wait 60s for crictl version
	I1018 09:54:54.215926  147357 ssh_runner.go:195] Run: which crictl
	I1018 09:54:54.219954  147357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 09:54:54.267482  147357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1018 09:54:54.267577  147357 ssh_runner.go:195] Run: crio --version
	I1018 09:54:54.301699  147357 ssh_runner.go:195] Run: crio --version
	I1018 09:54:54.335217  147357 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
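Editor's note: after restarting crio, the two "Will wait 60s" steps above first poll for the CRI socket to appear, then shell out to crictl for the version handshake printed above. A sketch of the socket wait, under the assumption that a plain stat-and-sleep loop is close enough to what start.go does:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
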
	I1018 09:54:56.246824  147302 kubeadm.go:318] [apiclient] All control plane components are healthy after 6.504105 seconds
	I1018 09:54:56.247015  147302 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:54:56.270849  147302 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:54:56.817336  147302 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:54:56.818059  147302 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-066041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:54:57.338369  147302 kubeadm.go:318] [bootstrap-token] Using token: 7ie9px.ge97j4y7v23tvun8
	I1018 09:54:57.339808  147302 out.go:252]   - Configuring RBAC rules ...
	I1018 09:54:57.339990  147302 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:54:57.346869  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:54:57.357097  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:54:57.366951  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:54:57.371316  147302 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:54:57.377726  147302 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:54:57.401036  147302 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:54:57.774038  147302 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:54:57.838491  147302 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:54:57.840086  147302 kubeadm.go:318] 
	I1018 09:54:57.840204  147302 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:54:57.840218  147302 kubeadm.go:318] 
	I1018 09:54:57.840319  147302 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:54:57.840332  147302 kubeadm.go:318] 
	I1018 09:54:57.840365  147302 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:54:57.840444  147302 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:54:57.840515  147302 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:54:57.840521  147302 kubeadm.go:318] 
	I1018 09:54:57.840591  147302 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:54:57.840597  147302 kubeadm.go:318] 
	I1018 09:54:57.840664  147302 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:54:57.840670  147302 kubeadm.go:318] 
	I1018 09:54:57.840737  147302 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:54:57.840840  147302 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:54:57.840929  147302 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:54:57.840935  147302 kubeadm.go:318] 
	I1018 09:54:57.841052  147302 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:54:57.841179  147302 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:54:57.841187  147302 kubeadm.go:318] 
	I1018 09:54:57.841306  147302 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7ie9px.ge97j4y7v23tvun8 \
	I1018 09:54:57.841449  147302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 \
	I1018 09:54:57.841478  147302 kubeadm.go:318] 	--control-plane 
	I1018 09:54:57.841483  147302 kubeadm.go:318] 
	I1018 09:54:57.841602  147302 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:54:57.841608  147302 kubeadm.go:318] 
	I1018 09:54:57.841724  147302 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7ie9px.ge97j4y7v23tvun8 \
	I1018 09:54:57.841869  147302 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:463854a2cb3078ec8852d42bc5c65ab166124e879b33f52b9deccf651fa13a68 
	I1018 09:54:57.844131  147302 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:54:57.844187  147302 cni.go:84] Creating CNI manager for ""
	I1018 09:54:57.844200  147302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:57.845955  147302 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
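	The "Configuring bridge CNI" step above writes a CNI conflist into /etc/cni/net.d inside the guest. As a rough sketch of what such a file can contain, assuming the stock CNI bridge plugin with host-local IPAM on the 10.244.0.0/16 pod CIDR these clusters use (the exact file minikube generates may differ):
	
	# hypothetical sketch, not the exact file minikube writes
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF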
	I1018 09:54:53.127500  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:53.128228  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:53.128256  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:53.128525  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:53.128581  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:53.128518  147991 retry.go:31] will retry after 1.043649707s: waiting for domain to come up
	I1018 09:54:54.173620  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:54.174304  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:54.174335  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:54.174642  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:54.174684  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:54.174621  147991 retry.go:31] will retry after 1.599394292s: waiting for domain to come up
	I1018 09:54:55.776612  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:55.777530  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:55.777559  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:55.778014  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:55.778071  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:55.777990  147991 retry.go:31] will retry after 1.636367317s: waiting for domain to come up
	I1018 09:54:57.416780  147724 main.go:141] libmachine: (no-preload-231061) DBG | domain no-preload-231061 has defined MAC address 52:54:00:e0:ab:92 in network mk-no-preload-231061
	I1018 09:54:57.417539  147724 main.go:141] libmachine: (no-preload-231061) DBG | no network interface addresses found for domain no-preload-231061 (source=lease)
	I1018 09:54:57.417595  147724 main.go:141] libmachine: (no-preload-231061) DBG | trying to list again with source=arp
	I1018 09:54:57.417906  147724 main.go:141] libmachine: (no-preload-231061) DBG | unable to find current IP address of domain no-preload-231061 in network mk-no-preload-231061 (interfaces detected: [])
	I1018 09:54:57.417938  147724 main.go:141] libmachine: (no-preload-231061) DBG | I1018 09:54:57.417863  147991 retry.go:31] will retry after 2.20798307s: waiting for domain to come up
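	The lengthening "will retry after" intervals above come from a growing backoff around libvirt's interface listing: the driver asks for the domain's addresses first from the DHCP lease table, then via ARP, and sleeps a little longer after each miss. A hypothetical shell rendering of that loop (minikube's real retry logic lives in Go, not shell):
	
	# keep polling libvirt for the domain's IP, doubling the wait each miss
	wait=1
	until virsh domifaddr no-preload-231061 --source arp | grep -q ipv4; do
	  echo "will retry after ${wait}s: waiting for domain to come up"
	  sleep "${wait}"
	  wait=$(( wait * 2 ))
	done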
	I1018 09:54:54.336616  147357 main.go:141] libmachine: (pause-551330) Calling .GetIP
	I1018 09:54:54.340024  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:54.340488  147357 main.go:141] libmachine: (pause-551330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:e6:0b", ip: ""} in network mk-pause-551330: {Iface:virbr1 ExpiryTime:2025-10-18 10:53:29 +0000 UTC Type:0 Mac:52:54:00:c8:e6:0b Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-551330 Clientid:01:52:54:00:c8:e6:0b}
	I1018 09:54:54.340516  147357 main.go:141] libmachine: (pause-551330) DBG | domain pause-551330 has defined IP address 192.168.72.173 and MAC address 52:54:00:c8:e6:0b in network mk-pause-551330
	I1018 09:54:54.340841  147357 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
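	The grep above checks for minikube's gateway alias; the /etc/hosts entry it expects is a single tab-separated line:
	
	192.168.72.1	host.minikube.internal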
	I1018 09:54:54.346478  147357 kubeadm.go:883] updating cluster {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:54:54.346648  147357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:54:54.346700  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.393189  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.393219  147357 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:54:54.393288  147357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:54:54.429351  147357 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:54:54.429382  147357 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:54:54.429393  147357 kubeadm.go:934] updating node { 192.168.72.173 8443 v1.34.1 crio true true} ...
	I1018 09:54:54.429532  147357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-551330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:54:54.429623  147357 ssh_runner.go:195] Run: crio config
	I1018 09:54:54.481697  147357 cni.go:84] Creating CNI manager for ""
	I1018 09:54:54.481725  147357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 09:54:54.481771  147357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:54:54.481808  147357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.173 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-551330 NodeName:pause-551330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:54:54.481985  147357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-551330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.173"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.173"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:54:54.482057  147357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:54:54.495054  147357 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:54:54.495156  147357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:54:54.507323  147357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1018 09:54:54.532818  147357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:54:54.554767  147357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
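	That last scp drops the rendered kubeadm config at /var/tmp/minikube/kubeadm.yaml.new. On kubeadm v1.26 and newer, a file like this can be sanity-checked by hand before it is used (a manual spot-check, not something the test performs):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new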
	I1018 09:54:54.577108  147357 ssh_runner.go:195] Run: grep 192.168.72.173	control-plane.minikube.internal$ /etc/hosts
	I1018 09:54:54.581771  147357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:54:54.748906  147357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:54:54.765440  147357 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330 for IP: 192.168.72.173
	I1018 09:54:54.765464  147357 certs.go:195] generating shared ca certs ...
	I1018 09:54:54.765481  147357 certs.go:227] acquiring lock for ca certs: {Name:mk3098e6b394f5f944bbfa1db4d8c1dc07639612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:54:54.765688  147357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key
	I1018 09:54:54.765743  147357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key
	I1018 09:54:54.765758  147357 certs.go:257] generating profile certs ...
	I1018 09:54:54.765873  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/client.key
	I1018 09:54:54.765955  147357 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key.f7abae6f
	I1018 09:54:54.766011  147357 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key
	I1018 09:54:54.766179  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem (1338 bytes)
	W1018 09:54:54.766220  147357 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373_empty.pem, impossibly tiny 0 bytes
	I1018 09:54:54.766234  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 09:54:54.766266  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/ca.pem (1082 bytes)
	I1018 09:54:54.766297  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:54:54.766330  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/certs/key.pem (1675 bytes)
	I1018 09:54:54.766394  147357 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem (1708 bytes)
	I1018 09:54:54.766996  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:54:54.799419  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:54:54.836447  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:54:54.876190  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:54:54.908602  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:54:54.946763  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:54:55.099316  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:54:55.164040  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/pause-551330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:54:55.252436  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/ssl/certs/1083732.pem --> /usr/share/ca-certificates/1083732.pem (1708 bytes)
	I1018 09:54:55.339043  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:54:55.415069  147357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-104457/.minikube/certs/108373.pem --> /usr/share/ca-certificates/108373.pem (1338 bytes)
	I1018 09:54:55.491732  147357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:54:55.546576  147357 ssh_runner.go:195] Run: openssl version
	I1018 09:54:55.562316  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108373.pem && ln -fs /usr/share/ca-certificates/108373.pem /etc/ssl/certs/108373.pem"
	I1018 09:54:55.591880  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601866  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:04 /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.601964  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108373.pem
	I1018 09:54:55.616288  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/108373.pem /etc/ssl/certs/51391683.0"
	I1018 09:54:55.647017  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1083732.pem && ln -fs /usr/share/ca-certificates/1083732.pem /etc/ssl/certs/1083732.pem"
	I1018 09:54:55.678662  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691170  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:04 /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.691247  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1083732.pem
	I1018 09:54:55.713975  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1083732.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:54:55.742740  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:54:55.778834  147357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795270  147357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:56 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.795346  147357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:54:55.816687  147357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
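	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: TLS code finds a CA in /etc/ssl/certs through a symlink named <subject-hash>.0. Recreating the last symlink by hand would look like:
	
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# ${HASH} resolves to b5213941 for this CA, matching the run above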
	I1018 09:54:55.852282  147357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:54:55.864301  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:54:55.886636  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:54:55.909452  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:54:55.926278  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:54:55.941213  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:54:55.955890  147357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
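	Each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is the signal that a cert needs regenerating. For example:
	
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h, regenerate"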
	I1018 09:54:55.974095  147357 kubeadm.go:400] StartCluster: {Name:pause-551330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-551330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:54:55.974274  147357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:54:55.974352  147357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:54:56.170596  147357 cri.go:89] found id: "29cc8bdc21235a3263fd07af980bbd5afddd5e8bf838d869aee15b79d773a494"
	I1018 09:54:56.170624  147357 cri.go:89] found id: "12ba7f533d86858ba90df34ecdc2481658f40f2fee74ee73c1d4d71422d3ac90"
	I1018 09:54:56.170630  147357 cri.go:89] found id: "9a47998f97871a1bdc1689b83a0f8637d3e8446f5280c36026c063fef6da5dee"
	I1018 09:54:56.170635  147357 cri.go:89] found id: "35e6ebdf38ddd767dbcb32100e38d541fabd6aa49dbcfe4f5c4ec0126f62afd6"
	I1018 09:54:56.170639  147357 cri.go:89] found id: "6cd73c1cfa681b6f01554bc334d6d83ec0b898a4c61889e41fc36e0da6cc8160"
	I1018 09:54:56.170644  147357 cri.go:89] found id: "cf297adff2cd81079a444636d2d0d432f18a698dd99539c0fcaf3442d5dd19d1"
	I1018 09:54:56.170648  147357 cri.go:89] found id: "95dca9a9c58403a13f82a1493979bb1137030c24168e0d5e658e0c4013ac19bc"
	I1018 09:54:56.170652  147357 cri.go:89] found id: "8e2b055b2814c8c9d86ead76882979ac75549da5e8b5ff1fdcfd1559f3bc5d6b"
	I1018 09:54:56.170655  147357 cri.go:89] found id: "a85801441afa7aeb2a2d98a543437e2586b071068cb98586798b3c805b2cd4ae"
	I1018 09:54:56.170664  147357 cri.go:89] found id: "9249eb8ae6f593eba3ce4059af8cd0db63cc9bb6627365a4204933eff5a4ea62"
	I1018 09:54:56.170669  147357 cri.go:89] found id: ""
	I1018 09:54:56.170731  147357 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-551330 -n pause-551330
helpers_test.go:269: (dbg) Run:  kubectl --context pause-551330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (84.86s)

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.83
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.23
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.68
22 TestOffline 58.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 198.76
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.56
35 TestAddons/parallel/Registry 19.55
36 TestAddons/parallel/RegistryCreds 0.72
38 TestAddons/parallel/InspektorGadget 5.33
39 TestAddons/parallel/MetricsServer 7.37
41 TestAddons/parallel/CSI 42.36
42 TestAddons/parallel/Headlamp 21.36
43 TestAddons/parallel/CloudSpanner 6.65
44 TestAddons/parallel/LocalPath 58.33
45 TestAddons/parallel/NvidiaDevicePlugin 6.76
46 TestAddons/parallel/Yakd 10.87
48 TestAddons/StoppedEnableDisable 80.53
49 TestCertOptions 63.01
50 TestCertExpiration 325.97
52 TestForceSystemdFlag 68.02
53 TestForceSystemdEnv 62.36
55 TestKVMDriverInstallOrUpdate 0.68
59 TestErrorSpam/setup 39.46
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.69
63 TestErrorSpam/unpause 1.91
64 TestErrorSpam/stop 5.24
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 57.13
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.79
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 9.87
76 TestFunctional/serial/CacheCmd/cache/add_local 2.65
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.23
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 31.27
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.55
87 TestFunctional/serial/LogsFileCmd 1.54
88 TestFunctional/serial/InvalidService 3.95
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 38.25
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 0.87
98 TestFunctional/parallel/ServiceCmdConnect 9.54
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 48.34
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.4
104 TestFunctional/parallel/MySQL 27.65
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.24
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
114 TestFunctional/parallel/License 0.9
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.75
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
131 TestFunctional/parallel/ProfileCmd/profile_list 0.35
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
133 TestFunctional/parallel/MountCmd/any-port 8.68
134 TestFunctional/parallel/ServiceCmd/List 0.27
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
137 TestFunctional/parallel/ServiceCmd/Format 0.31
138 TestFunctional/parallel/ServiceCmd/URL 0.38
139 TestFunctional/parallel/MountCmd/specific-port 1.59
140 TestFunctional/parallel/MountCmd/VerifyCleanup 0.82
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
145 TestFunctional/parallel/ImageCommands/ImageBuild 7.94
146 TestFunctional/parallel/ImageCommands/Setup 1.72
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.34
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.73
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 207.41
162 TestMultiControlPlane/serial/DeployApp 7.32
163 TestMultiControlPlane/serial/PingHostFromPods 1.25
164 TestMultiControlPlane/serial/AddWorkerNode 47.81
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
167 TestMultiControlPlane/serial/CopyFile 13.49
168 TestMultiControlPlane/serial/StopSecondaryNode 84.29
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.79
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 366.76
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.34
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
175 TestMultiControlPlane/serial/StopCluster 250.62
176 TestMultiControlPlane/serial/RestartCluster 108.76
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
178 TestMultiControlPlane/serial/AddSecondaryNode 85.74
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 53.28
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 8.05
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 85.15
215 TestMountStart/serial/StartWithMountFirst 21.49
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 20.99
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.75
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 20.12
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 96.43
227 TestMultiNode/serial/DeployApp2Nodes 5.42
228 TestMultiNode/serial/PingHostFrom2Pods 0.82
229 TestMultiNode/serial/AddNode 42.01
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.46
233 TestMultiNode/serial/StopNode 2.64
234 TestMultiNode/serial/StartAfterStop 39.29
235 TestMultiNode/serial/RestartKeepsNodes 296.5
236 TestMultiNode/serial/DeleteNode 2.75
237 TestMultiNode/serial/StopMultiNode 145.59
238 TestMultiNode/serial/RestartMultiNode 87.49
239 TestMultiNode/serial/ValidateNameConflict 40.63
246 TestScheduledStopUnix 109.49
250 TestRunningBinaryUpgrade 144.6
252 TestKubernetesUpgrade 131.86
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 103.04
257 TestNoKubernetes/serial/StartWithStopK8s 34.1
258 TestNoKubernetes/serial/Start 34.67
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
260 TestNoKubernetes/serial/ProfileList 6.32
261 TestNoKubernetes/serial/Stop 1.43
262 TestNoKubernetes/serial/StartNoArgs 33.9
263 TestStoppedBinaryUpgrade/Setup 3.36
264 TestStoppedBinaryUpgrade/Upgrade 87.69
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
280 TestNetworkPlugins/group/false 3.76
285 TestPause/serial/Start 76.03
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
288 TestStartStop/group/old-k8s-version/serial/FirstStart 63.56
291 TestStartStop/group/no-preload/serial/FirstStart 85.17
293 TestStartStop/group/embed-certs/serial/FirstStart 94.04
294 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.72
296 TestStartStop/group/old-k8s-version/serial/Stop 75.44
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.71
299 TestStartStop/group/no-preload/serial/DeployApp 11.33
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
301 TestStartStop/group/no-preload/serial/Stop 88.95
302 TestStartStop/group/embed-certs/serial/DeployApp 10.27
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
304 TestStartStop/group/embed-certs/serial/Stop 86.3
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/old-k8s-version/serial/SecondStart 44.04
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.09
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 59.15
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
315 TestStartStop/group/old-k8s-version/serial/Pause 3.01
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 56.66
319 TestStartStop/group/newest-cni/serial/FirstStart 75.81
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 74.94
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
328 TestStartStop/group/no-preload/serial/Pause 3.96
329 TestStartStop/group/embed-certs/serial/Pause 3.95
330 TestNetworkPlugins/group/auto/Start 60.34
331 TestNetworkPlugins/group/kindnet/Start 85.44
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.43
334 TestStartStop/group/newest-cni/serial/Stop 11.16
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
336 TestStartStop/group/newest-cni/serial/SecondStart 66.35
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.27
341 TestNetworkPlugins/group/calico/Start 96.26
342 TestNetworkPlugins/group/auto/KubeletFlags 0.22
343 TestNetworkPlugins/group/auto/NetCatPod 11.3
344 TestNetworkPlugins/group/auto/DNS 0.23
345 TestNetworkPlugins/group/auto/Localhost 0.18
346 TestNetworkPlugins/group/auto/HairPin 0.16
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
350 TestStartStop/group/newest-cni/serial/Pause 3.43
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/custom-flannel/Start 79.28
353 TestNetworkPlugins/group/enable-default-cni/Start 82.86
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
355 TestNetworkPlugins/group/kindnet/NetCatPod 13.25
356 TestNetworkPlugins/group/kindnet/DNS 0.15
357 TestNetworkPlugins/group/kindnet/Localhost 0.13
358 TestNetworkPlugins/group/kindnet/HairPin 0.14
359 TestNetworkPlugins/group/flannel/Start 83.8
360 TestNetworkPlugins/group/calico/ControllerPod 5.12
361 TestNetworkPlugins/group/calico/KubeletFlags 0.35
362 TestNetworkPlugins/group/calico/NetCatPod 28.83
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 20.28
365 TestNetworkPlugins/group/calico/DNS 0.17
366 TestNetworkPlugins/group/calico/Localhost 0.14
367 TestNetworkPlugins/group/calico/HairPin 0.14
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 17.28
370 TestNetworkPlugins/group/custom-flannel/DNS 0.2
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
373 TestNetworkPlugins/group/bridge/Start 59.02
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
379 TestNetworkPlugins/group/flannel/NetCatPod 12.24
380 TestNetworkPlugins/group/flannel/DNS 0.16
381 TestNetworkPlugins/group/flannel/Localhost 0.13
382 TestNetworkPlugins/group/flannel/HairPin 0.13
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 9.26
385 TestNetworkPlugins/group/bridge/DNS 0.14
386 TestNetworkPlugins/group/bridge/Localhost 0.12
387 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (22.83s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-077632 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-077632 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.826901588s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.83s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 08:55:44.861509  108373 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 08:55:44.861655  108373 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-077632
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-077632: exit status 85 (62.425725ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-077632 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-077632 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:55:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:55:22.078056  108386 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:55:22.078199  108386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:22.078210  108386 out.go:374] Setting ErrFile to fd 2...
	I1018 08:55:22.078214  108386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:22.078428  108386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	W1018 08:55:22.078584  108386 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21764-104457/.minikube/config/config.json: open /home/jenkins/minikube-integration/21764-104457/.minikube/config/config.json: no such file or directory
	I1018 08:55:22.079054  108386 out.go:368] Setting JSON to true
	I1018 08:55:22.079973  108386 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2262,"bootTime":1760775460,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:55:22.080075  108386 start.go:141] virtualization: kvm guest
	I1018 08:55:22.082330  108386 out.go:99] [download-only-077632] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:55:22.082471  108386 notify.go:220] Checking for updates...
	W1018 08:55:22.082479  108386 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 08:55:22.083891  108386 out.go:171] MINIKUBE_LOCATION=21764
	I1018 08:55:22.085514  108386 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:55:22.086776  108386 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 08:55:22.088239  108386 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 08:55:22.089736  108386 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:55:22.092039  108386 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:55:22.092355  108386 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:55:22.612822  108386 out.go:99] Using the kvm2 driver based on user configuration
	I1018 08:55:22.612867  108386 start.go:305] selected driver: kvm2
	I1018 08:55:22.612877  108386 start.go:925] validating driver "kvm2" against <nil>
	I1018 08:55:22.613548  108386 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:22.613780  108386 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:22.629665  108386 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:22.629715  108386 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:22.643433  108386 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:22.643490  108386 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:55:22.643983  108386 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 08:55:22.644126  108386 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:55:22.644194  108386 cni.go:84] Creating CNI manager for ""
	I1018 08:55:22.644237  108386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:55:22.644249  108386 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 08:55:22.644300  108386 start.go:349] cluster config:
	{Name:download-only-077632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-077632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:55:22.644500  108386 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:22.646494  108386 out.go:99] Downloading VM boot image ...
	I1018 08:55:22.646544  108386 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1018 08:55:32.731235  108386 out.go:99] Starting "download-only-077632" primary control-plane node in "download-only-077632" cluster
	I1018 08:55:32.731260  108386 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:55:32.830369  108386 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 08:55:32.830413  108386 cache.go:58] Caching tarball of preloaded images
	I1018 08:55:32.830680  108386 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:55:32.832418  108386 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 08:55:32.832441  108386 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 08:55:33.314444  108386 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 08:55:33.314579  108386 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-077632 host does not exist
	  To start a cluster, run: "minikube start -p download-only-077632"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
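The preload fetch in the log above is checksum-pinned: the ?checksum=md5:... query string is consumed by minikube's downloader, which verifies the tarball after fetching rather than trusting the server. Verifying the cached file by hand (a manual spot-check, not part of the test):

	md5sum /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected: 72bc7f8573f574c02d8c9a9b3496176b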

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-077632
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (11.23s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-425706 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-425706 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (11.229198156s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 08:55:56.444170  108373 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 08:55:56.444244  108373 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-425706
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-425706: exit status 85 (62.863299ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-077632 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-077632 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ 18 Oct 25 08:55 UTC │
	│ delete  │ -p download-only-077632                                                                                                                                                                             │ download-only-077632 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │ 18 Oct 25 08:55 UTC │
	│ start   │ -o=json --download-only -p download-only-425706 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-425706 │ jenkins │ v1.37.0 │ 18 Oct 25 08:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:55:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:55:45.255652  108657 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:55:45.255789  108657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:45.255794  108657 out.go:374] Setting ErrFile to fd 2...
	I1018 08:55:45.255799  108657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:45.255976  108657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 08:55:45.256474  108657 out.go:368] Setting JSON to true
	I1018 08:55:45.257313  108657 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2285,"bootTime":1760775460,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:55:45.257418  108657 start.go:141] virtualization: kvm guest
	I1018 08:55:45.259190  108657 out.go:99] [download-only-425706] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:55:45.259331  108657 notify.go:220] Checking for updates...
	I1018 08:55:45.260721  108657 out.go:171] MINIKUBE_LOCATION=21764
	I1018 08:55:45.262068  108657 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:55:45.263224  108657 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 08:55:45.264411  108657 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 08:55:45.265526  108657 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:55:45.267659  108657 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:55:45.267947  108657 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:55:45.298680  108657 out.go:99] Using the kvm2 driver based on user configuration
	I1018 08:55:45.298707  108657 start.go:305] selected driver: kvm2
	I1018 08:55:45.298722  108657 start.go:925] validating driver "kvm2" against <nil>
	I1018 08:55:45.299066  108657 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:45.299183  108657 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:45.313456  108657 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:45.313489  108657 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21764-104457/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1018 08:55:45.327317  108657 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1018 08:55:45.327385  108657 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:55:45.327969  108657 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1018 08:55:45.328116  108657 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:55:45.328158  108657 cni.go:84] Creating CNI manager for ""
	I1018 08:55:45.328217  108657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1018 08:55:45.328228  108657 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1018 08:55:45.328308  108657 start.go:349] cluster config:
	{Name:download-only-425706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-425706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:55:45.328411  108657 iso.go:125] acquiring lock: {Name:mk595382428940cd9914c5b9c5232890ef7481d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:55:45.331325  108657 out.go:99] Starting "download-only-425706" primary control-plane node in "download-only-425706" cluster
	I1018 08:55:45.331357  108657 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:55:45.841996  108657 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:55:45.842032  108657 cache.go:58] Caching tarball of preloaded images
	I1018 08:55:45.842217  108657 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:55:45.844071  108657 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 08:55:45.844098  108657 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 08:55:45.943108  108657 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1018 08:55:45.943170  108657 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21764-104457/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-425706 host does not exist
	  To start a cluster, run: "minikube start -p download-only-425706"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-425706
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 08:55:57.069864  108373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-232384 --alsologtostderr --binary-mirror http://127.0.0.1:36103 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-232384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-232384
--- PASS: TestBinaryMirror (0.68s)
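
TestBinaryMirror starts a download-only profile with --binary-mirror pointing at a local HTTP endpoint; the preceding binary.go line shows the host-side kubectl still being resolved against dl.k8s.io. A rough manual equivalent (the local server here is only a stand-in; a real mirror must serve the Kubernetes release layout):

	python3 -m http.server 36103 &
	minikube start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:36103 --driver=kvm2 --container-runtime=crio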

                                                
                                    
TestOffline (58.55s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-377235 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-377235 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.563692068s)
helpers_test.go:175: Cleaning up "offline-crio-377235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-377235
--- PASS: TestOffline (58.55s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-281483
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-281483: exit status 85 (54.775337ms)

                                                
                                                
-- stdout --
	* Profile "addons-281483" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-281483"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-281483
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-281483: exit status 85 (54.206609ms)

                                                
                                                
-- stdout --
	* Profile "addons-281483" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-281483"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (198.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-281483 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-281483 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m18.761505659s)
--- PASS: TestAddons/Setup (198.76s)
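
Setup enables every addon in a single start invocation via repeated --addons flags. On a cluster that is already running, the same addons can be toggled one at a time, e.g.:

	minikube addons enable ingress -p addons-281483
	minikube addons list -p addons-281483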

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-281483 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-281483 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-281483 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-281483 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d3848d01-e41f-467d-aa3b-5eb78fb5c1a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d3848d01-e41f-467d-aa3b-5eb78fb5c1a2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005404596s
addons_test.go:694: (dbg) Run:  kubectl --context addons-281483 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-281483 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-281483 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.56s)

                                                
                                    
TestAddons/parallel/Registry (19.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.414012ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-z2m56" [3d215353-695f-4b94-af96-f7f4675e103e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005129403s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-h9ssw" [26352e63-4436-4855-b2e8-f4819ae96865] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003729842s
addons_test.go:392: (dbg) Run:  kubectl --context addons-281483 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-281483 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-281483 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.662251385s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 ip
2025/10/18 08:59:54 [DEBUG] GET http://192.168.39.144:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.55s)
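
The registry check above probes the in-cluster service from a throwaway busybox pod. The same probe works ad hoc against any cluster with the addon enabled (pod name is illustrative):

	kubectl --context addons-281483 run --rm -it registry-probe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"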

                                                
                                    
TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.800949ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-281483
addons_test.go:332: (dbg) Run:  kubectl --context addons-281483 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-c8dxt" [6dd63393-ff91-4b28-bcdb-e40921dc9b49] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004997685s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.33s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.249324ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4bbzn" [a841e975-54e5-458f-aead-2b0ca7cee2c3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004913632s
addons_test.go:463: (dbg) Run:  kubectl --context addons-281483 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable metrics-server --alsologtostderr -v=1: (1.28247036s)
--- PASS: TestAddons/parallel/MetricsServer (7.37s)

                                                
                                    
TestAddons/parallel/CSI (42.36s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 08:59:50.041633  108373 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 08:59:50.046297  108373 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 08:59:50.046340  108373 kapi.go:107] duration metric: took 4.717248ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.731703ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-281483 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-281483 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3426994f-18c8-49b2-b5b9-2edadcfad26c] Pending
helpers_test.go:352: "task-pv-pod" [3426994f-18c8-49b2-b5b9-2edadcfad26c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3426994f-18c8-49b2-b5b9-2edadcfad26c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.007327611s
addons_test.go:572: (dbg) Run:  kubectl --context addons-281483 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-281483 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-281483 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-281483 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-281483 delete pod task-pv-pod: (1.359858769s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-281483 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-281483 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-281483 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [9635a94d-b6e6-49a0-8650-35b57829c491] Pending
helpers_test.go:352: "task-pv-pod-restore" [9635a94d-b6e6-49a0-8650-35b57829c491] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [9635a94d-b6e6-49a0-8650-35b57829c491] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004696878s
addons_test.go:614: (dbg) Run:  kubectl --context addons-281483 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-281483 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-281483 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.10276808s)
--- PASS: TestAddons/parallel/CSI (42.36s)
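
The CSI sequence is: create PVC, run a pod against it, snapshot the volume, delete both, then restore a new PVC from the snapshot and run a second pod. The snapshot step corresponds to a manifest along these lines (a sketch only; the actual testdata file may differ, and the snapshot class name is assumed to be the csi-hostpath default):

	kubectl --context addons-281483 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	EOF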

                                                
                                    
TestAddons/parallel/Headlamp (21.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-281483 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-49lbr" [1caa6900-2aef-4f0d-9a17-7ca9b4bac60d] Pending
helpers_test.go:352: "headlamp-6945c6f4d-49lbr" [1caa6900-2aef-4f0d-9a17-7ca9b4bac60d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-49lbr" [1caa6900-2aef-4f0d-9a17-7ca9b4bac60d] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-49lbr" [1caa6900-2aef-4f0d-9a17-7ca9b4bac60d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.00458492s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable headlamp --alsologtostderr -v=1: (6.441413673s)
--- PASS: TestAddons/parallel/Headlamp (21.36s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-vhtn8" [64b7c3e4-4b2d-4190-b982-b29e21b338e3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004054222s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
TestAddons/parallel/LocalPath (58.33s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-281483 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-281483 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2a5dcb42-0350-4974-b650-0a465f607b39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2a5dcb42-0350-4974-b650-0a465f607b39] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2a5dcb42-0350-4974-b650-0a465f607b39] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.005349594s
addons_test.go:967: (dbg) Run:  kubectl --context addons-281483 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 ssh "cat /opt/local-path-provisioner/pvc-508c2e42-fca4-46f4-88c0-fd619d317595_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-281483 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-281483 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.283525972s)
--- PASS: TestAddons/parallel/LocalPath (58.33s)
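
LocalPath binds a PVC through the rancher local-path provisioner, runs a pod that writes file1, then reads it back from /opt/local-path-provisioner on the node. A minimal PVC for the same flow (a sketch; the storage class name is assumed to be the provisioner's default "local-path", and the size is arbitrary):

	kubectl --context addons-281483 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	EOF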

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mhqn2" [9ffae91b-e17e-4dab-89bd-05ac9e5967b4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003464231s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
TestAddons/parallel/Yakd (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nghsf" [ed188a5d-1407-4456-9fea-c73d98c4411b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005044045s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-281483 addons disable yakd --alsologtostderr -v=1: (5.866527177s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

                                                
                                    
TestAddons/StoppedEnableDisable (80.53s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-281483
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-281483: (1m20.251639395s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-281483
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-281483
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-281483
--- PASS: TestAddons/StoppedEnableDisable (80.53s)

                                                
                                    
TestCertOptions (63.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-161184 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-161184 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.533537977s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-161184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-161184 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-161184 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-161184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-161184
--- PASS: TestCertOptions (63.01s)

                                                
                                    
TestCertExpiration (325.97s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-464564 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-464564 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.683657814s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-464564 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:54:00.314991  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-464564 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.400509066s)
helpers_test.go:175: Cleaning up "cert-expiration-464564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-464564
--- PASS: TestCertExpiration (325.97s)
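
The test provisions certs with a 3-minute lifetime, lets that window pass, then restarts with --cert-expiration=8760h so the restart picks up the longer expiry. By hand (profile name illustrative):

	minikube start -p cert-demo --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# wait >3m so the certs lapse, then renew on restart:
	minikube start -p cert-demo --cert-expiration=8760h --driver=kvm2 --container-runtime=crio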

                                                
                                    
TestForceSystemdFlag (68.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-657531 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-657531 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.790695571s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-657531 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-657531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-657531
--- PASS: TestForceSystemdFlag (68.02s)
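
The assertion reduces to checking CRI-O's cgroup manager in the drop-in file the test cats. Assuming --force-systemd does what its name implies, the manual check is:

	minikube start -p systemd-demo --force-systemd --driver=kvm2 --container-runtime=crio
	minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected: cgroup_manager = "systemd"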

                                                
                                    
TestForceSystemdEnv (62.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-407131 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-407131 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.605750457s)
helpers_test.go:175: Cleaning up "force-systemd-env-407131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-407131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-407131: (1.758058677s)
--- PASS: TestForceSystemdEnv (62.36s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.68s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1018 09:52:57.480403  108373 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 09:52:57.480599  108373 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3070366613/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:52:57.512300  108373 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3070366613/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 09:52:57.512338  108373 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 09:52:57.512533  108373 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 09:52:57.512582  108373 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3070366613/001/docker-machine-driver-kvm2
I1018 09:52:58.016977  108373 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3070366613/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:52:58.034028  108373 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3070366613/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.68s)
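
The install.go lines show the update logic: run the driver binary to read its version, compare against the wanted release, and re-download on mismatch. A manual equivalent, using the release URL from the log (that the binary answers a "version" argument is an assumption inferred from the validator's output):

	docker-machine-driver-kvm2 version
	# if it reports something older than v1.37.0:
	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64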

                                                
                                    
TestErrorSpam/setup (39.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-822202 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822202 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:04:17.237552  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.244045  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.255442  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.276933  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.318358  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.399983  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.561596  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:17.883343  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:18.525420  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:19.806900  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:22.369057  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:27.490829  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:04:37.732533  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-822202 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822202 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.462851316s)
--- PASS: TestErrorSpam/setup (39.46s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

                                                
                                    
x
+
TestErrorSpam/stop (5.24s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop: (2.028931849s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop: (1.565916114s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-822202 --log_dir /tmp/nospam-822202 stop: (1.645424959s)
--- PASS: TestErrorSpam/stop (5.24s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21764-104457/.minikube/files/etc/test/nested/copy/108373/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (57.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:04:58.214697  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:05:39.177282  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-361078 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.125376259s)
--- PASS: TestFunctional/serial/StartWithProxy (57.13s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (32.79s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1018 09:05:49.373768  108373 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-361078 --alsologtostderr -v=8: (32.789774229s)
functional_test.go:678: soft start took 32.790555676s for "functional-361078" cluster.
I1018 09:06:22.163940  108373 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.79s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-361078 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (9.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:3.1: (2.668274187s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:3.3: (4.159092726s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 cache add registry.k8s.io/pause:latest: (3.044713509s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.87s)
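
For reference, the add_remote flow above can be reproduced by hand. A minimal sketch, assuming a running profile named functional-361078 (any profile name works):

# Pull a remote image into minikube's host-side cache and load it into the node.
minikube -p functional-361078 cache add registry.k8s.io/pause:3.1
# List the cached images (the cache is global, so no -p is needed here).
minikube cache list
# Confirm the image is visible to the node's container runtime.
minikube -p functional-361078 ssh "sudo crictl images | grep pause"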

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-361078 /tmp/TestFunctionalserialCacheCmdcacheadd_local2825636377/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache add minikube-local-cache-test:functional-361078
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 cache add minikube-local-cache-test:functional-361078: (2.299868053s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache delete minikube-local-cache-test:functional-361078
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-361078
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.76315ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 cache reload: (1.498594199s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)
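
The cache_reload sequence above treats the host-side cache as the source of truth: deleting an image inside the node and running cache reload restores it. A hand-run sketch under the same assumptions:

# Remove the image from the node's runtime; the host-side cache still has it.
minikube -p functional-361078 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
# inspecti now fails with "no such image", as in the output above.
minikube -p functional-361078 ssh "sudo crictl inspecti registry.k8s.io/pause:latest" || true
# Push everything in the host cache back into the node, then verify.
minikube -p functional-361078 cache reload
minikube -p functional-361078 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"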

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 kubectl -- --context functional-361078 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-361078 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 09:07:01.101785  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-361078 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.271830166s)
functional_test.go:776: restart took 31.27195935s for "functional-361078" cluster.
I1018 09:07:09.001689  108373 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (31.27s)
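
--extra-config forwards a flag to the named control-plane component across the restart, and minikube persists it in the profile (it reappears as ExtraOptions in the profile dumps later in this report). A sketch of the same restart plus a hand check that the apiserver picked the plugin up; the label selector is the standard kubeadm one, assumed here rather than taken from the test:

# Restart the cluster with an extra apiserver flag.
minikube start -p functional-361078 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
# Inspect the kube-apiserver static pod's command line for the plugin.
kubectl --context functional-361078 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission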

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-361078 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 logs: (1.551408319s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 logs --file /tmp/TestFunctionalserialLogsFileCmd1542646341/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 logs --file /tmp/TestFunctionalserialLogsFileCmd1542646341/001/logs.txt: (1.540158857s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-361078 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-361078
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-361078: exit status 115 (302.190381ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.22:30206 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-361078 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)
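
InvalidService asserts that `minikube service` refuses to print a URL when a service has no running endpoints. The contents of testdata/invalidsvc.yaml are not shown in this report, so the sketch below creates an equivalent service by hand: a NodePort service whose selector matches no pods.

kubectl --context functional-361078 create service nodeport invalid-svc --tcp=80:80
# With no running pod behind it, minikube exits non-zero (status 115 above)
# instead of printing a URL.
minikube -p functional-361078 service invalid-svc; echo "exit status: $?"
kubectl --context functional-361078 delete service invalid-svc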

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 config get cpus: exit status 14 (64.213708ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 config get cpus: exit status 14 (49.217835ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
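
The config subcommand distinguishes "key absent" from other errors: `config get` on an unset key exits 14 (as both non-zero exits above show), while `config unset` is idempotent and succeeds either way. Sketch:

minikube -p functional-361078 config unset cpus            # succeeds even if already unset
minikube -p functional-361078 config get cpus; echo "$?"   # exit 14: key not found
minikube -p functional-361078 config set cpus 2
minikube -p functional-361078 config get cpus              # prints 2
minikube -p functional-361078 config unset cpus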

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (38.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-361078 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-361078 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 116297: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (38.25s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-361078 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (151.102804ms)

                                                
                                                
-- stdout --
	* [functional-361078] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:07:26.733352  116139 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:26.733623  116139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.733634  116139 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:26.733638  116139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.733848  116139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:07:26.734348  116139 out.go:368] Setting JSON to false
	I1018 09:07:26.735348  116139 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2987,"bootTime":1760775460,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:07:26.735450  116139 start.go:141] virtualization: kvm guest
	I1018 09:07:26.740749  116139 out.go:179] * [functional-361078] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:07:26.742393  116139 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:07:26.742400  116139 notify.go:220] Checking for updates...
	I1018 09:07:26.748198  116139 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:07:26.749647  116139 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:07:26.750831  116139 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:07:26.751986  116139 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:07:26.753302  116139 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:07:26.754848  116139 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:26.755293  116139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:26.755387  116139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:26.769505  116139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37813
	I1018 09:07:26.770039  116139 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:26.770685  116139 main.go:141] libmachine: Using API Version  1
	I1018 09:07:26.770714  116139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:26.771081  116139 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:26.771330  116139 main.go:141] libmachine: (functional-361078) Calling .DriverName
	I1018 09:07:26.771605  116139 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:07:26.771946  116139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:26.771997  116139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:26.789604  116139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I1018 09:07:26.790112  116139 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:26.790755  116139 main.go:141] libmachine: Using API Version  1
	I1018 09:07:26.790789  116139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:26.791258  116139 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:26.791469  116139 main.go:141] libmachine: (functional-361078) Calling .DriverName
	I1018 09:07:26.827296  116139 out.go:179] * Using the kvm2 driver based on existing profile
	I1018 09:07:26.828995  116139 start.go:305] selected driver: kvm2
	I1018 09:07:26.829020  116139 start.go:925] validating driver "kvm2" against &{Name:functional-361078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-361078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:26.829211  116139 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:07:26.832029  116139 out.go:203] 
	W1018 09:07:26.833355  116139 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 09:07:26.834854  116139 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.31s)
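
--dry-run validates flags against the existing profile without starting anything, which is why the sub-minimum memory request above fails in ~150ms with exit status 23 rather than after a VM boot. Sketch:

# Validation only; no VM is touched.
minikube start -p functional-361078 --dry-run --driver=kvm2 --container-runtime=crio
# Requesting less than the 1800MB usable minimum fails fast.
minikube start -p functional-361078 --dry-run --memory 250MB \
  --driver=kvm2 --container-runtime=crio; echo "exit status: $?"   # 23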

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-361078 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-361078 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (177.449538ms)

                                                
                                                
-- stdout --
	* [functional-361078] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:07:26.992718  116199 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:26.992842  116199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.992852  116199 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:26.992859  116199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.993223  116199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:07:26.994186  116199 out.go:368] Setting JSON to false
	I1018 09:07:26.995332  116199 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2987,"bootTime":1760775460,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:07:26.995452  116199 start.go:141] virtualization: kvm guest
	I1018 09:07:26.998330  116199 out.go:179] * [functional-361078] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 09:07:27.000424  116199 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:07:27.000405  116199 notify.go:220] Checking for updates...
	I1018 09:07:27.001812  116199 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:07:27.003035  116199 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:07:27.004437  116199 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:07:27.005973  116199 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:07:27.009947  116199 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:07:27.012701  116199 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:27.013329  116199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:27.013384  116199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:27.033478  116199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I1018 09:07:27.034161  116199 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:27.034925  116199 main.go:141] libmachine: Using API Version  1
	I1018 09:07:27.034963  116199 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:27.035394  116199 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:27.035600  116199 main.go:141] libmachine: (functional-361078) Calling .DriverName
	I1018 09:07:27.035946  116199 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:07:27.036422  116199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:07:27.036502  116199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:07:27.054854  116199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1018 09:07:27.055432  116199 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:07:27.056099  116199 main.go:141] libmachine: Using API Version  1
	I1018 09:07:27.056128  116199 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:07:27.056600  116199 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:07:27.056841  116199 main.go:141] libmachine: (functional-361078) Calling .DriverName
	I1018 09:07:27.100411  116199 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1018 09:07:27.101556  116199 start.go:305] selected driver: kvm2
	I1018 09:07:27.101574  116199 start.go:925] validating driver "kvm2" against &{Name:functional-361078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-361078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:27.101730  116199 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:07:27.104200  116199 out.go:203] 
	W1018 09:07:27.105564  116199 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 09:07:27.106901  116199 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)
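
status supports the three output modes used above: default text, a Go template over the status struct, and JSON. Sketch:

minikube -p functional-361078 status
# Go template over the status struct; fields include .Host, .Kubelet, .APIServer, .Kubeconfig.
minikube -p functional-361078 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
# Machine-readable JSON.
minikube -p functional-361078 status -o json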

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-361078 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-361078 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-r2rww" [b6d1001b-e1bc-45b6-bc82-226d2d788909] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-r2rww" [b6d1001b-e1bc-45b6-bc82-226d2d788909] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018050771s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.22:32637
functional_test.go:1680: http://192.168.39.22:32637: success! body:
Request served by hello-node-connect-7d85dfc575-r2rww

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.22:32637
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.54s)
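
The connect test wires a deployment to a NodePort service and resolves the endpoint with `minikube service --url`. The same flow by hand:

kubectl --context functional-361078 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-361078 expose deployment hello-node-connect --type=NodePort --port=8080
# Prints http://<node-ip>:<node-port>, e.g. http://192.168.39.22:32637 above.
URL=$(minikube -p functional-361078 service hello-node-connect --url)
curl -s "$URL"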

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (48.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [75758de3-cee2-42f0-bfaf-042490f768d4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003724113s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-361078 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-361078 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-361078 get pvc myclaim -o=json
I1018 09:07:23.251170  108373 retry.go:31] will retry after 2.230304734s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d9f54177-a576-42cb-8eff-ea0f0ef0c2a4 ResourceVersion:692 Generation:0 CreationTimestamp:2025-10-18 09:07:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001bf38f0 VolumeMode:0xc001bf3900 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-361078 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-361078 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e8106e41-37b8-4883-bf69-a1e3e9d12b42] Pending
helpers_test.go:352: "sp-pod" [e8106e41-37b8-4883-bf69-a1e3e9d12b42] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e8106e41-37b8-4883-bf69-a1e3e9d12b42] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005671395s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-361078 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-361078 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-361078 delete -f testdata/storage-provisioner/pod.yaml: (1.181768741s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-361078 apply -f testdata/storage-provisioner/pod.yaml
I1018 09:07:41.194687  108373 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d962bf04-cb6a-4dea-8ab0-d04a25b0bc4a] Pending
helpers_test.go:352: "sp-pod" [d962bf04-cb6a-4dea-8ab0-d04a25b0bc4a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d962bf04-cb6a-4dea-8ab0-d04a25b0bc4a] Running
2025/10/18 09:08:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004348922s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-361078 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.34s)
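
The PVC test checks that data written to the mounted volume (the `touch /tmp/mount/foo` above) survives deleting and recreating the pod, because the claim stays Bound to the same provisioned volume. The actual manifests live in testdata and are not shown here; a minimal stand-in for the claim, matching the 500Mi/ReadWriteOnce spec visible in the retry dump above:

kubectl --context functional-361078 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# The storage-provisioner addon should move it from Pending to Bound.
kubectl --context functional-361078 get pvc myclaim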

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh -n functional-361078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cp functional-361078:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4175374255/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh -n functional-361078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh -n functional-361078 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)
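
cp works in both directions, as exercised above; a file inside the node is addressed as <profile>:<path>. Sketch:

# Host -> node.
minikube -p functional-361078 cp testdata/cp-test.txt /home/docker/cp-test.txt
# Node -> host.
minikube -p functional-361078 cp functional-361078:/home/docker/cp-test.txt /tmp/cp-test.txt
# Verify inside the VM (-n selects the node).
minikube -p functional-361078 ssh -n functional-361078 "sudo cat /home/docker/cp-test.txt"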

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-361078 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pbkvh" [8507b66d-587f-4810-9ad4-8df931c594fe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pbkvh" [8507b66d-587f-4810-9ad4-8df931c594fe] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.090138612s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-361078 exec mysql-5bb876957f-pbkvh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-361078 exec mysql-5bb876957f-pbkvh -- mysql -ppassword -e "show databases;": exit status 1 (302.1734ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 09:07:51.611258  108373 retry.go:31] will retry after 821.01146ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-361078 exec mysql-5bb876957f-pbkvh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-361078 exec mysql-5bb876957f-pbkvh -- mysql -ppassword -e "show databases;": exit status 1 (166.947609ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1018 09:07:52.599557  108373 retry.go:31] will retry after 1.929925062s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-361078 exec mysql-5bb876957f-pbkvh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.65s)
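
The two retries above are normal mysqld warm-up: ERROR 1045 while the container initializes credentials, then ERROR 2002 while the socket comes up. A sketch of the same poll-until-ready loop:

POD=$(kubectl --context functional-361078 get pods -l app=mysql \
  -o jsonpath='{.items[0].metadata.name}')
# Early attempts fail while the server boots; loop until the query succeeds.
until kubectl --context functional-361078 exec "$POD" -- mysql -ppassword -e 'show databases;'; do
  sleep 2
done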

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/108373/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /etc/test/nested/copy/108373/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
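
FileSync covers minikube's file-sync mechanism: anything placed under $MINIKUBE_HOME/files/<path> is copied to /<path> inside the VM on start. A sketch of the same check (the 108373 path component is just this run's process ID):

mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/108373"
echo 'Test file for checking file sync process' \
  > "$HOME/.minikube/files/etc/test/nested/copy/108373/hosts"
# Re-running start performs the sync into the node.
minikube start -p functional-361078
minikube -p functional-361078 ssh "sudo cat /etc/test/nested/copy/108373/hosts"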

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/108373.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /etc/ssl/certs/108373.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/108373.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /usr/share/ca-certificates/108373.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1083732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /etc/ssl/certs/1083732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1083732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /usr/share/ca-certificates/1083732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)
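
CertSync covers the companion mechanism for CA certificates: files under $MINIKUBE_HOME/certs are installed into the VM's trust store, which is why the same PEM is checked above in /etc/ssl/certs, /usr/share/ca-certificates, and under an OpenSSL hash name (51391683.0). Sketch with a hypothetical my-ca.pem:

cp my-ca.pem "$HOME/.minikube/certs/"   # my-ca.pem: any CA certificate you want trusted
minikube start -p functional-361078     # starting (or restarting) installs it
minikube -p functional-361078 ssh "sudo cat /etc/ssl/certs/my-ca.pem"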

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-361078 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh "sudo systemctl is-active docker": exit status 1 (235.205343ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh "sudo systemctl is-active containerd": exit status 1 (234.156452ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
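
Note: both non-zero exits above are the expected outcome. systemctl is-active exits 0 only when the unit is active; exit code 3 (visible as "Process exited with status 3") is the conventional "not running" code, and stdout carries the state string "inactive". A standalone sketch of the same check, assuming the profile name from this log:

// runtime_inactive_check.go - sketch: confirm docker and containerd are
// inactive inside the guest of a crio-based cluster.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-361078"
	for _, unit := range []string{"docker", "containerd"} {
		// A non-zero exit is expected here, so the error is ignored and the
		// printed state string is inspected instead.
		out, _ := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "active" {
			fmt.Fprintf(os.Stderr, "%s unexpectedly active\n", unit)
			os.Exit(1)
		}
		fmt.Printf("%s: %s\n", unit, state)
	}
}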

                                                
                                    
TestFunctional/parallel/License (0.9s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-361078 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-361078 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7cqb7" [5cf29541-99f6-4d14-9547-234e4a74783a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-7cqb7" [5cf29541-99f6-4d14-9547-234e4a74783a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.00412556s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)
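
Note: DeployApp is plain kubectl: create a deployment from kicbase/echo-server, expose it as a NodePort service on port 8080, then poll until a matching pod reports Running (9s here, against a 10m budget). A minimal sketch of the same create/expose/wait flow, assuming the context name from this log:

// deploy_wait.go - sketch of the deploy-and-wait flow, shelling out to kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	args = append([]string{"--context", "functional-361078"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Errors ignored for brevity; the wait loop below is the real check.
	kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")

	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		// Phase of the first pod carrying the deployment's app label.
		out, err := kubectl("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[0].status.phase}")
		if err == nil && strings.TrimSpace(out) == "Running" {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for hello-node")
	os.Exit(1)
}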

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "296.462996ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.507407ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "297.799667ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.281916ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.68s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdany-port2659805478/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760778441352329146" to /tmp/TestFunctionalparallelMountCmdany-port2659805478/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760778441352329146" to /tmp/TestFunctionalparallelMountCmdany-port2659805478/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760778441352329146" to /tmp/TestFunctionalparallelMountCmdany-port2659805478/001/test-1760778441352329146
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.678285ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 09:07:21.562312  108373 retry.go:31] will retry after 585.111644ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 test-1760778441352329146
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh cat /mount-9p/test-1760778441352329146
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-361078 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3080b797-d565-4970-848c-00c9cc6bb158] Pending
helpers_test.go:352: "busybox-mount" [3080b797-d565-4970-848c-00c9cc6bb158] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3080b797-d565-4970-848c-00c9cc6bb158] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3080b797-d565-4970-848c-00c9cc6bb158] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004707335s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-361078 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdany-port2659805478/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.68s)
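
Note: the single findmnt failure followed by "will retry after 585.111644ms" is the expected race between launching the mount daemon and the 9p mount appearing in the guest; the harness simply polls until findmnt succeeds. A sketch of that poll with exponential backoff, assuming the profile and mount point from this log:

// mount_poll.go - sketch: wait for a 9p mount to appear inside the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-361078"
	const mountPoint = "/mount-9p"

	backoff := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		// findmnt exits non-zero while the mount is not yet present.
		err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T "+mountPoint+" | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		fmt.Printf("attempt %d: not mounted yet, retrying in %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2 // double the delay between probes
	}
	fmt.Fprintln(os.Stderr, "mount never appeared")
	os.Exit(1)
}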

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service list -o json
I1018 09:07:25.681044  108373 detect.go:223] nested VM detected
functional_test.go:1504: Took "293.430241ms" to run "out/minikube-linux-amd64 -p functional-361078 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.22:32343
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.22:32343
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
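
Note: service --url resolves the NodePort endpoint (http://192.168.39.22:32343 above) without opening a browser, so the returned URL can be probed directly. A sketch that resolves the URL and issues a GET, assuming the profile and service name from this log:

// service_probe.go - sketch: resolve a service URL via minikube and probe it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-361078",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "service --url:", err)
		os.Exit(1)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.22:32343
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "GET:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s\n", url, resp.Status, body)
}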

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.59s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdspecific-port2211367009/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.576972ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 09:07:30.280477  108373 retry.go:31] will retry after 256.130863ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdspecific-port2211367009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh "sudo umount -f /mount-9p": exit status 1 (221.25635ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-361078 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdspecific-port2211367009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-361078 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-361078 /tmp/TestFunctionalparallelMountCmdVerifyCleanup340082964/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361078 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-361078
localhost/kicbase/echo-server:functional-361078
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361078 image ls --format short --alsologtostderr:
I1018 09:07:49.232586  117285 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:49.232893  117285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:49.232905  117285 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:49.232910  117285 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:49.233089  117285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
I1018 09:07:49.233733  117285 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:49.233825  117285 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:49.234256  117285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:49.234325  117285 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:49.248696  117285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
I1018 09:07:49.249236  117285 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:49.249785  117285 main.go:141] libmachine: Using API Version  1
I1018 09:07:49.249807  117285 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:49.250267  117285 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:49.250490  117285 main.go:141] libmachine: (functional-361078) Calling .GetState
I1018 09:07:49.252604  117285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:49.252665  117285 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:49.267168  117285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
I1018 09:07:49.267809  117285 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:49.268513  117285 main.go:141] libmachine: Using API Version  1
I1018 09:07:49.268548  117285 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:49.269005  117285 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:49.269271  117285 main.go:141] libmachine: (functional-361078) Calling .DriverName
I1018 09:07:49.269515  117285 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:49.269656  117285 main.go:141] libmachine: (functional-361078) Calling .GetSSHHostname
I1018 09:07:49.273367  117285 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:49.273865  117285 main.go:141] libmachine: (functional-361078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:d0:d4", ip: ""} in network mk-functional-361078: {Iface:virbr1 ExpiryTime:2025-10-18 10:05:07 +0000 UTC Type:0 Mac:52:54:00:8e:d0:d4 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-361078 Clientid:01:52:54:00:8e:d0:d4}
I1018 09:07:49.273901  117285 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:49.274068  117285 main.go:141] libmachine: (functional-361078) Calling .GetSSHPort
I1018 09:07:49.274307  117285 main.go:141] libmachine: (functional-361078) Calling .GetSSHKeyPath
I1018 09:07:49.274494  117285 main.go:141] libmachine: (functional-361078) Calling .GetSSHUsername
I1018 09:07:49.274709  117285 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/functional-361078/id_rsa Username:docker}
I1018 09:07:49.376729  117285 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 09:07:49.425918  117285 main.go:141] libmachine: Making call to close driver server
I1018 09:07:49.425938  117285 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:49.426309  117285 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:49.426362  117285 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:49.426381  117285 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:49.426397  117285 main.go:141] libmachine: Making call to close driver server
I1018 09:07:49.426410  117285 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:49.426690  117285 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:49.426718  117285 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:49.426737  117285 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
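
Note: as the stderr trace shows, image ls on a crio cluster is answered by running "sudo crictl images --output json" over SSH and reformatting the result. A sketch that performs the same call and prints just the repo tags; the JSON field names follow the CRI ListImages response and should be treated as assumptions if your crictl version differs:

// crictl_images.go - sketch: list image tags the way image ls does on crio.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Minimal slice of the crictl JSON: only the fields this sketch needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-361078", "ssh",
		"sudo crictl images --output json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl:", err)
		os.Exit(1)
	}
	var imgs imageList
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}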

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361078 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-361078  │ b67793e9feb39 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-361078  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361078 image ls --format table --alsologtostderr:
I1018 09:07:54.923850  117458 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:54.923946  117458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:54.923954  117458 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:54.923958  117458 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:54.924185  117458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
I1018 09:07:54.924787  117458 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:54.924876  117458 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:54.925289  117458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:54.925358  117458 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:54.939457  117458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
I1018 09:07:54.939994  117458 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:54.940571  117458 main.go:141] libmachine: Using API Version  1
I1018 09:07:54.940605  117458 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:54.940980  117458 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:54.941245  117458 main.go:141] libmachine: (functional-361078) Calling .GetState
I1018 09:07:54.943394  117458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:54.943446  117458 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:54.956710  117458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
I1018 09:07:54.957308  117458 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:54.957999  117458 main.go:141] libmachine: Using API Version  1
I1018 09:07:54.958124  117458 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:54.958514  117458 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:54.958732  117458 main.go:141] libmachine: (functional-361078) Calling .DriverName
I1018 09:07:54.959033  117458 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:54.959060  117458 main.go:141] libmachine: (functional-361078) Calling .GetSSHHostname
I1018 09:07:54.962325  117458 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:54.962805  117458 main.go:141] libmachine: (functional-361078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:d0:d4", ip: ""} in network mk-functional-361078: {Iface:virbr1 ExpiryTime:2025-10-18 10:05:07 +0000 UTC Type:0 Mac:52:54:00:8e:d0:d4 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-361078 Clientid:01:52:54:00:8e:d0:d4}
I1018 09:07:54.962823  117458 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:54.963052  117458 main.go:141] libmachine: (functional-361078) Calling .GetSSHPort
I1018 09:07:54.963232  117458 main.go:141] libmachine: (functional-361078) Calling .GetSSHKeyPath
I1018 09:07:54.963419  117458 main.go:141] libmachine: (functional-361078) Calling .GetSSHUsername
I1018 09:07:54.963575  117458 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/functional-361078/id_rsa Username:docker}
I1018 09:07:55.048056  117458 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 09:07:55.089572  117458 main.go:141] libmachine: Making call to close driver server
I1018 09:07:55.089591  117458 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:55.089905  117458 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:55.089926  117458 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:55.089951  117458 main.go:141] libmachine: Making call to close driver server
I1018 09:07:55.089953  117458 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:55.089960  117458 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:55.090361  117458 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:55.090397  117458 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:55.090417  117458 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361078 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-361078"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"b67793e9feb3968678d47ba61091275359952ec65328950ff4210ffca9d0cd50","repoDigests":["localhost/minikube-local-cache-test@sha256:50b2ba759fb7d14175fbcecdc73d5edadc0df8fa6e26efa64bbddde55e9af213"],"repoTags":["localhost/minikube-local-cache-test:functional-361078"],"size":"3328"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361078 image ls --format json --alsologtostderr:
I1018 09:07:54.699753  117434 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:54.699856  117434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:54.699867  117434 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:54.699873  117434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:54.700126  117434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
I1018 09:07:54.700775  117434 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:54.700890  117434 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:54.701320  117434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:54.701401  117434 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:54.715640  117434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
I1018 09:07:54.716226  117434 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:54.716845  117434 main.go:141] libmachine: Using API Version  1
I1018 09:07:54.716896  117434 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:54.717306  117434 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:54.717505  117434 main.go:141] libmachine: (functional-361078) Calling .GetState
I1018 09:07:54.719514  117434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:54.719560  117434 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:54.733250  117434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
I1018 09:07:54.733807  117434 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:54.734401  117434 main.go:141] libmachine: Using API Version  1
I1018 09:07:54.734435  117434 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:54.734843  117434 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:54.735047  117434 main.go:141] libmachine: (functional-361078) Calling .DriverName
I1018 09:07:54.735301  117434 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:54.735327  117434 main.go:141] libmachine: (functional-361078) Calling .GetSSHHostname
I1018 09:07:54.738743  117434 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:54.739263  117434 main.go:141] libmachine: (functional-361078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:d0:d4", ip: ""} in network mk-functional-361078: {Iface:virbr1 ExpiryTime:2025-10-18 10:05:07 +0000 UTC Type:0 Mac:52:54:00:8e:d0:d4 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-361078 Clientid:01:52:54:00:8e:d0:d4}
I1018 09:07:54.739293  117434 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:54.739491  117434 main.go:141] libmachine: (functional-361078) Calling .GetSSHPort
I1018 09:07:54.739700  117434 main.go:141] libmachine: (functional-361078) Calling .GetSSHKeyPath
I1018 09:07:54.739854  117434 main.go:141] libmachine: (functional-361078) Calling .GetSSHUsername
I1018 09:07:54.740019  117434 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/functional-361078/id_rsa Username:docker}
I1018 09:07:54.824067  117434 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 09:07:54.870087  117434 main.go:141] libmachine: Making call to close driver server
I1018 09:07:54.870099  117434 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:54.870414  117434 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:54.870430  117434 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:54.870436  117434 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:54.870459  117434 main.go:141] libmachine: Making call to close driver server
I1018 09:07:54.870467  117434 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:54.870724  117434 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:54.870810  117434 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:54.870841  117434 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361078 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-361078
size: "4944818"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b67793e9feb3968678d47ba61091275359952ec65328950ff4210ffca9d0cd50
repoDigests:
- localhost/minikube-local-cache-test@sha256:50b2ba759fb7d14175fbcecdc73d5edadc0df8fa6e26efa64bbddde55e9af213
repoTags:
- localhost/minikube-local-cache-test:functional-361078
size: "3328"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361078 image ls --format yaml --alsologtostderr:
I1018 09:07:49.496564  117309 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:49.496724  117309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:49.496735  117309 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:49.496742  117309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:49.497059  117309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
I1018 09:07:49.497876  117309 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:49.498011  117309 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:49.498676  117309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:49.498753  117309 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:49.512749  117309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
I1018 09:07:49.513387  117309 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:49.514029  117309 main.go:141] libmachine: Using API Version  1
I1018 09:07:49.514070  117309 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:49.514435  117309 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:49.514654  117309 main.go:141] libmachine: (functional-361078) Calling .GetState
I1018 09:07:49.516964  117309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:49.517019  117309 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:49.531078  117309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
I1018 09:07:49.531620  117309 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:49.532253  117309 main.go:141] libmachine: Using API Version  1
I1018 09:07:49.532293  117309 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:49.532703  117309 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:49.532994  117309 main.go:141] libmachine: (functional-361078) Calling .DriverName
I1018 09:07:49.533346  117309 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:49.533374  117309 main.go:141] libmachine: (functional-361078) Calling .GetSSHHostname
I1018 09:07:49.537335  117309 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:49.537988  117309 main.go:141] libmachine: (functional-361078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:d0:d4", ip: ""} in network mk-functional-361078: {Iface:virbr1 ExpiryTime:2025-10-18 10:05:07 +0000 UTC Type:0 Mac:52:54:00:8e:d0:d4 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-361078 Clientid:01:52:54:00:8e:d0:d4}
I1018 09:07:49.538035  117309 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:49.538213  117309 main.go:141] libmachine: (functional-361078) Calling .GetSSHPort
I1018 09:07:49.538429  117309 main.go:141] libmachine: (functional-361078) Calling .GetSSHKeyPath
I1018 09:07:49.538681  117309 main.go:141] libmachine: (functional-361078) Calling .GetSSHUsername
I1018 09:07:49.538837  117309 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/functional-361078/id_rsa Username:docker}
I1018 09:07:49.668316  117309 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 09:07:49.730549  117309 main.go:141] libmachine: Making call to close driver server
I1018 09:07:49.730567  117309 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:49.730924  117309 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:49.730942  117309 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:49.730974  117309 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:49.731011  117309 main.go:141] libmachine: Making call to close driver server
I1018 09:07:49.731037  117309 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:49.731309  117309 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:49.731322  117309 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:49.731350  117309 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
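For reference, the image listing quoted above is produced by running "sudo crictl images --output json" on the node over SSH (visible in the ssh_runner line of the log). Below is a minimal Go sketch of consuming that JSON locally; the struct is an assumption modeled on the fields shown in the output (id, repoTags, repoDigests, size), not minikube's own type, and crictl must be pointed at a configured runtime endpoint for this to work outside the VM.

// Hypothetical sketch: parse `crictl images --output json`.
// The struct shape is illustrative, based on the fields quoted above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // serialized as a string, as in the listing above
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// In the test this command runs on the minikube node over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}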

TestFunctional/parallel/ImageCommands/ImageBuild (7.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-361078 ssh pgrep buildkitd: exit status 1 (234.350234ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image build -t localhost/my-image:functional-361078 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 image build -t localhost/my-image:functional-361078 testdata/build --alsologtostderr: (7.47456998s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-361078 image build -t localhost/my-image:functional-361078 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bd95d77f0a0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-361078
--> d1c9d5b23f4
Successfully tagged localhost/my-image:functional-361078
d1c9d5b23f45953c2da58aab1e081070dd29c400d3ed33cf4b8262bf8c41cc16
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-361078 image build -t localhost/my-image:functional-361078 testdata/build --alsologtostderr:
I1018 09:07:50.028809  117364 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:50.029151  117364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:50.029163  117364 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:50.029169  117364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:50.029369  117364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
I1018 09:07:50.030009  117364 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:50.030810  117364 config.go:182] Loaded profile config "functional-361078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:50.031274  117364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:50.031327  117364 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:50.046199  117364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
I1018 09:07:50.046851  117364 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:50.047472  117364 main.go:141] libmachine: Using API Version  1
I1018 09:07:50.047512  117364 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:50.047950  117364 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:50.048191  117364 main.go:141] libmachine: (functional-361078) Calling .GetState
I1018 09:07:50.050088  117364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1018 09:07:50.050161  117364 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 09:07:50.064469  117364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
I1018 09:07:50.065261  117364 main.go:141] libmachine: () Calling .GetVersion
I1018 09:07:50.065893  117364 main.go:141] libmachine: Using API Version  1
I1018 09:07:50.065927  117364 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 09:07:50.066434  117364 main.go:141] libmachine: () Calling .GetMachineName
I1018 09:07:50.066682  117364 main.go:141] libmachine: (functional-361078) Calling .DriverName
I1018 09:07:50.066993  117364 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:50.067038  117364 main.go:141] libmachine: (functional-361078) Calling .GetSSHHostname
I1018 09:07:50.071239  117364 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:50.071792  117364 main.go:141] libmachine: (functional-361078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:d0:d4", ip: ""} in network mk-functional-361078: {Iface:virbr1 ExpiryTime:2025-10-18 10:05:07 +0000 UTC Type:0 Mac:52:54:00:8e:d0:d4 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-361078 Clientid:01:52:54:00:8e:d0:d4}
I1018 09:07:50.071840  117364 main.go:141] libmachine: (functional-361078) DBG | domain functional-361078 has defined IP address 192.168.39.22 and MAC address 52:54:00:8e:d0:d4 in network mk-functional-361078
I1018 09:07:50.072092  117364 main.go:141] libmachine: (functional-361078) Calling .GetSSHPort
I1018 09:07:50.072380  117364 main.go:141] libmachine: (functional-361078) Calling .GetSSHKeyPath
I1018 09:07:50.072605  117364 main.go:141] libmachine: (functional-361078) Calling .GetSSHUsername
I1018 09:07:50.072815  117364 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/functional-361078/id_rsa Username:docker}
I1018 09:07:50.177034  117364 build_images.go:161] Building image from path: /tmp/build.3078727694.tar
I1018 09:07:50.177121  117364 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 09:07:50.216923  117364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3078727694.tar
I1018 09:07:50.230961  117364 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3078727694.tar: stat -c "%s %y" /var/lib/minikube/build/build.3078727694.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3078727694.tar': No such file or directory
I1018 09:07:50.230998  117364 ssh_runner.go:362] scp /tmp/build.3078727694.tar --> /var/lib/minikube/build/build.3078727694.tar (3072 bytes)
I1018 09:07:50.308858  117364 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3078727694
I1018 09:07:50.330331  117364 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3078727694 -xf /var/lib/minikube/build/build.3078727694.tar
I1018 09:07:50.350707  117364 crio.go:315] Building image: /var/lib/minikube/build/build.3078727694
I1018 09:07:50.350798  117364 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-361078 /var/lib/minikube/build/build.3078727694 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 09:07:57.416540  117364 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-361078 /var/lib/minikube/build/build.3078727694 --cgroup-manager=cgroupfs: (7.065709178s)
I1018 09:07:57.416621  117364 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3078727694
I1018 09:07:57.430640  117364 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3078727694.tar
I1018 09:07:57.444245  117364 build_images.go:217] Built localhost/my-image:functional-361078 from /tmp/build.3078727694.tar
I1018 09:07:57.444292  117364 build_images.go:133] succeeded building to: functional-361078
I1018 09:07:57.444296  117364 build_images.go:134] failed building to: 
I1018 09:07:57.444324  117364 main.go:141] libmachine: Making call to close driver server
I1018 09:07:57.444339  117364 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:57.444664  117364 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:57.444686  117364 main.go:141] libmachine: Making call to close connection to plugin binary
I1018 09:07:57.444697  117364 main.go:141] libmachine: Making call to close driver server
I1018 09:07:57.444706  117364 main.go:141] libmachine: (functional-361078) Calling .Close
I1018 09:07:57.444972  117364 main.go:141] libmachine: (functional-361078) DBG | Closing plugin on server side
I1018 09:07:57.444985  117364 main.go:141] libmachine: Successfully made call to close driver server
I1018 09:07:57.444998  117364 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.94s)
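The build path visible in the stderr above: the build context is tarred on the host, copied to /var/lib/minikube/build over SSH, extracted, built with podman using the cgroupfs cgroup manager, and the temporary directory and tarball are then removed. A rough sketch of the on-node step sequence follows; it assumes local execution, a placeholder directory name in place of build.3078727694, and that the context tarball already exists (the real runner executes each step over SSH).

// Hypothetical sketch of the on-node build steps seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// run is an illustrative helper; minikube's ssh_runner executes these
// on the node rather than locally.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	dir := "/var/lib/minikube/build/build.example" // placeholder path
	run("sudo", "mkdir", "-p", dir)
	// Assumes the context tarball was already copied to dir+".tar".
	run("sudo", "tar", "-C", dir, "-xf", dir+".tar")
	run("sudo", "podman", "build", "-t", "localhost/my-image:example", dir,
		"--cgroup-manager=cgroupfs")
	run("sudo", "rm", "-rf", dir) // cleanup mirrors the final rm steps in the log
}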

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.696827955s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-361078
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image load --daemon kicbase/echo-server:functional-361078 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 image load --daemon kicbase/echo-server:functional-361078 --alsologtostderr: (1.055314236s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image load --daemon kicbase/echo-server:functional-361078 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-361078
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image load --daemon kicbase/echo-server:functional-361078 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 image load --daemon kicbase/echo-server:functional-361078 --alsologtostderr: (3.242878419s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image save kicbase/echo-server:functional-361078 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-361078 image save kicbase/echo-server:functional-361078 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.726824626s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image rm kicbase/echo-server:functional-361078 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-361078
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-361078 image save --daemon kicbase/echo-server:functional-361078 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-361078
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)
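Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a tar round-trip: save the image from the node's runtime to a tarball, remove it from the runtime, load it back from the tarball, then push it into the host's docker daemon. A condensed sketch driving the same minikube subcommands shown in the logs; the profile name, image tag, and tarball path here are placeholders.

// Hypothetical condensation of the four image round-trip tests above.
package main

import (
	"os"
	"os/exec"
)

// mk shells out to the minikube CLI, streaming its output.
func mk(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	img, tar := "kicbase/echo-server:example", "/tmp/echo-server-save.tar"
	mk("-p", "example", "image", "save", img, tar)        // runtime -> tarball
	mk("-p", "example", "image", "rm", img)               // drop from runtime
	mk("-p", "example", "image", "load", tar)             // tarball -> runtime
	mk("-p", "example", "image", "save", "--daemon", img) // runtime -> docker daemon
}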

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-361078
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-361078
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-361078
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (207.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:09:17.228453  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:09:44.944335  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m26.671919352s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (207.41s)

TestMultiControlPlane/serial/DeployApp (7.32s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 kubectl -- rollout status deployment/busybox: (5.096684009s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-nx5nb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-qk75z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-xlc2k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-nx5nb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-qk75z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-xlc2k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-nx5nb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-qk75z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-xlc2k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.32s)
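The DeployApp flow above boils down to: apply the busybox manifest, wait for the Deployment rollout, then resolve three names from inside every pod. A condensed, hypothetical equivalent is sketched below; pod discovery uses the same jsonpath query as the log, the context name is taken from the log, and the helper is illustrative rather than the test's own code.

// Hypothetical condensation of the DeployApp DNS checks.
package main

import (
	"os/exec"
	"strings"
)

// kubectl wraps the CLI against the ha-744200 context from the log.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "ha-744200"}, args...)...).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	kubectl("rollout", "status", "deployment/busybox")
	pods := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	for _, pod := range strings.Fields(pods) {
		for _, host := range []string{
			"kubernetes.io",
			"kubernetes.default",
			"kubernetes.default.svc.cluster.local",
		} {
			kubectl("exec", pod, "--", "nslookup", host)
		}
	}
}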

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-nx5nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-nx5nb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-qk75z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-qk75z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-xlc2k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 kubectl -- exec busybox-7b57f96db7-xlc2k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
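The shell pipeline in the exec commands above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of busybox nslookup output and extracts the resolved address from its third space-separated field, which the test then pings once from inside the pod. The sketch below performs the same extraction on captured output; it assumes the same output shape and, as a hedge against spacing differences, takes the last field of line five rather than strictly the third.

// Hypothetical re-implementation of the nslookup/awk/cut pipeline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("nslookup", "host.minikube.internal").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		panic("unexpected nslookup output shape")
	}
	fields := strings.Fields(lines[4]) // awk 'NR==5' (1-indexed)
	if len(fields) == 0 {
		panic("no fields on line 5")
	}
	ip := fields[len(fields)-1] // the test uses cut -d' ' -f3
	fmt.Println("host.minikube.internal resolved to", ip)
	if err := exec.Command("ping", "-c", "1", ip).Run(); err != nil {
		panic(err)
	}
}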

TestMultiControlPlane/serial/AddWorkerNode (47.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node add --alsologtostderr -v 5
E1018 09:12:16.303536  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.310075  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.321612  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.343085  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.384660  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.466212  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.627797  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:16.949399  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:17.591391  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:18.873272  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:21.435390  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:12:26.556679  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 node add --alsologtostderr -v 5: (46.880815373s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.81s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-744200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

TestMultiControlPlane/serial/CopyFile (13.49s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp testdata/cp-test.txt ha-744200:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile477729767/001/cp-test_ha-744200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200:/home/docker/cp-test.txt ha-744200-m02:/home/docker/cp-test_ha-744200_ha-744200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test_ha-744200_ha-744200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200:/home/docker/cp-test.txt ha-744200-m03:/home/docker/cp-test_ha-744200_ha-744200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test_ha-744200_ha-744200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200:/home/docker/cp-test.txt ha-744200-m04:/home/docker/cp-test_ha-744200_ha-744200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test_ha-744200_ha-744200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp testdata/cp-test.txt ha-744200-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile477729767/001/cp-test_ha-744200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m02:/home/docker/cp-test.txt ha-744200:/home/docker/cp-test_ha-744200-m02_ha-744200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test_ha-744200-m02_ha-744200.txt"
E1018 09:12:36.798577  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m02:/home/docker/cp-test.txt ha-744200-m03:/home/docker/cp-test_ha-744200-m02_ha-744200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test_ha-744200-m02_ha-744200-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m02:/home/docker/cp-test.txt ha-744200-m04:/home/docker/cp-test_ha-744200-m02_ha-744200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test_ha-744200-m02_ha-744200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp testdata/cp-test.txt ha-744200-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile477729767/001/cp-test_ha-744200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m03:/home/docker/cp-test.txt ha-744200:/home/docker/cp-test_ha-744200-m03_ha-744200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test_ha-744200-m03_ha-744200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m03:/home/docker/cp-test.txt ha-744200-m02:/home/docker/cp-test_ha-744200-m03_ha-744200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test_ha-744200-m03_ha-744200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m03:/home/docker/cp-test.txt ha-744200-m04:/home/docker/cp-test_ha-744200-m03_ha-744200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test_ha-744200-m03_ha-744200-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp testdata/cp-test.txt ha-744200-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile477729767/001/cp-test_ha-744200-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m04:/home/docker/cp-test.txt ha-744200:/home/docker/cp-test_ha-744200-m04_ha-744200.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200 "sudo cat /home/docker/cp-test_ha-744200-m04_ha-744200.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m04:/home/docker/cp-test.txt ha-744200-m02:/home/docker/cp-test_ha-744200-m04_ha-744200-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m02 "sudo cat /home/docker/cp-test_ha-744200-m04_ha-744200-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 cp ha-744200-m04:/home/docker/cp-test.txt ha-744200-m03:/home/docker/cp-test_ha-744200-m04_ha-744200-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 ssh -n ha-744200-m03 "sudo cat /home/docker/cp-test_ha-744200-m04_ha-744200-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.49s)
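The copy matrix above follows a simple pattern: seed each node with testdata/cp-test.txt, then fan that file out to every other node, verifying each hop with "ssh ... sudo cat". The sketch below prints the equivalent command sequence for the four nodes named in the log (the node-to-local /tmp copies are elided for brevity).

// Hypothetical generator for the cp/ssh verification matrix above.
package main

import "fmt"

func main() {
	nodes := []string{"ha-744200", "ha-744200-m02", "ha-744200-m03", "ha-744200-m04"}
	for _, src := range nodes {
		// Seed the source node, then read the file back.
		fmt.Printf("minikube -p ha-744200 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("minikube -p ha-744200 ssh -n %s \"sudo cat /home/docker/cp-test.txt\"\n", src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Fan out to every other node and verify the copy landed.
			fmt.Printf("minikube -p ha-744200 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
			fmt.Printf("minikube -p ha-744200 ssh -n %s \"sudo cat /home/docker/cp-test_%s_%s.txt\"\n",
				dst, src, dst)
		}
	}
}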

TestMultiControlPlane/serial/StopSecondaryNode (84.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node stop m02 --alsologtostderr -v 5
E1018 09:12:57.280451  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:13:38.242573  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 node stop m02 --alsologtostderr -v 5: (1m23.590495559s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5: exit status 7 (697.446314ms)

-- stdout --
	ha-744200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744200-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1018 09:14:08.403393  122055 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:14:08.403664  122055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:14:08.403673  122055 out.go:374] Setting ErrFile to fd 2...
	I1018 09:14:08.403678  122055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:14:08.403957  122055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:14:08.404229  122055 out.go:368] Setting JSON to false
	I1018 09:14:08.404267  122055 mustload.go:65] Loading cluster: ha-744200
	I1018 09:14:08.404331  122055 notify.go:220] Checking for updates...
	I1018 09:14:08.404708  122055 config.go:182] Loaded profile config "ha-744200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:14:08.404727  122055 status.go:174] checking status of ha-744200 ...
	I1018 09:14:08.405192  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.405248  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.419219  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I1018 09:14:08.419695  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.420313  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.420340  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.420737  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.420972  122055 main.go:141] libmachine: (ha-744200) Calling .GetState
	I1018 09:14:08.423100  122055 status.go:371] ha-744200 host status = "Running" (err=<nil>)
	I1018 09:14:08.423126  122055 host.go:66] Checking if "ha-744200" exists ...
	I1018 09:14:08.423608  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.423663  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.438746  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I1018 09:14:08.439381  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.439863  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.439887  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.440242  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.440491  122055 main.go:141] libmachine: (ha-744200) Calling .GetIP
	I1018 09:14:08.444389  122055 main.go:141] libmachine: (ha-744200) DBG | domain ha-744200 has defined MAC address 52:54:00:18:42:ce in network mk-ha-744200
	I1018 09:14:08.444932  122055 main.go:141] libmachine: (ha-744200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:42:ce", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:08:21 +0000 UTC Type:0 Mac:52:54:00:18:42:ce Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-744200 Clientid:01:52:54:00:18:42:ce}
	I1018 09:14:08.444966  122055 main.go:141] libmachine: (ha-744200) DBG | domain ha-744200 has defined IP address 192.168.39.12 and MAC address 52:54:00:18:42:ce in network mk-ha-744200
	I1018 09:14:08.445172  122055 host.go:66] Checking if "ha-744200" exists ...
	I1018 09:14:08.445644  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.445695  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.460271  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1018 09:14:08.460835  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.461393  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.461417  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.461761  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.461968  122055 main.go:141] libmachine: (ha-744200) Calling .DriverName
	I1018 09:14:08.462164  122055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:14:08.462207  122055 main.go:141] libmachine: (ha-744200) Calling .GetSSHHostname
	I1018 09:14:08.465743  122055 main.go:141] libmachine: (ha-744200) DBG | domain ha-744200 has defined MAC address 52:54:00:18:42:ce in network mk-ha-744200
	I1018 09:14:08.466284  122055 main.go:141] libmachine: (ha-744200) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:42:ce", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:08:21 +0000 UTC Type:0 Mac:52:54:00:18:42:ce Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-744200 Clientid:01:52:54:00:18:42:ce}
	I1018 09:14:08.466320  122055 main.go:141] libmachine: (ha-744200) DBG | domain ha-744200 has defined IP address 192.168.39.12 and MAC address 52:54:00:18:42:ce in network mk-ha-744200
	I1018 09:14:08.466508  122055 main.go:141] libmachine: (ha-744200) Calling .GetSSHPort
	I1018 09:14:08.466691  122055 main.go:141] libmachine: (ha-744200) Calling .GetSSHKeyPath
	I1018 09:14:08.466850  122055 main.go:141] libmachine: (ha-744200) Calling .GetSSHUsername
	I1018 09:14:08.467001  122055 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/ha-744200/id_rsa Username:docker}
	I1018 09:14:08.560036  122055 ssh_runner.go:195] Run: systemctl --version
	I1018 09:14:08.567529  122055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:14:08.586761  122055 kubeconfig.go:125] found "ha-744200" server: "https://192.168.39.254:8443"
	I1018 09:14:08.586799  122055 api_server.go:166] Checking apiserver status ...
	I1018 09:14:08.586832  122055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:14:08.608263  122055 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W1018 09:14:08.621670  122055 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:14:08.621729  122055 ssh_runner.go:195] Run: ls
	I1018 09:14:08.627098  122055 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 09:14:08.633751  122055 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 09:14:08.633776  122055 status.go:463] ha-744200 apiserver status = Running (err=<nil>)
	I1018 09:14:08.633785  122055 status.go:176] ha-744200 status: &{Name:ha-744200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:14:08.633803  122055 status.go:174] checking status of ha-744200-m02 ...
	I1018 09:14:08.634106  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.634168  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.647966  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I1018 09:14:08.648559  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.649150  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.649177  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.649608  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.649906  122055 main.go:141] libmachine: (ha-744200-m02) Calling .GetState
	I1018 09:14:08.652038  122055 status.go:371] ha-744200-m02 host status = "Stopped" (err=<nil>)
	I1018 09:14:08.652057  122055 status.go:384] host is not running, skipping remaining checks
	I1018 09:14:08.652063  122055 status.go:176] ha-744200-m02 status: &{Name:ha-744200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:14:08.652081  122055 status.go:174] checking status of ha-744200-m03 ...
	I1018 09:14:08.652502  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.652559  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.668242  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
	I1018 09:14:08.668767  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.669205  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.669225  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.669712  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.669995  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetState
	I1018 09:14:08.672090  122055 status.go:371] ha-744200-m03 host status = "Running" (err=<nil>)
	I1018 09:14:08.672108  122055 host.go:66] Checking if "ha-744200-m03" exists ...
	I1018 09:14:08.672415  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.672455  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.687179  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38071
	I1018 09:14:08.687738  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.688257  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.688281  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.688669  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.688950  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetIP
	I1018 09:14:08.692209  122055 main.go:141] libmachine: (ha-744200-m03) DBG | domain ha-744200-m03 has defined MAC address 52:54:00:f6:77:5f in network mk-ha-744200
	I1018 09:14:08.692836  122055 main.go:141] libmachine: (ha-744200-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:77:5f", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:10:18 +0000 UTC Type:0 Mac:52:54:00:f6:77:5f Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-744200-m03 Clientid:01:52:54:00:f6:77:5f}
	I1018 09:14:08.692860  122055 main.go:141] libmachine: (ha-744200-m03) DBG | domain ha-744200-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:f6:77:5f in network mk-ha-744200
	I1018 09:14:08.693091  122055 host.go:66] Checking if "ha-744200-m03" exists ...
	I1018 09:14:08.693426  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.693472  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.708714  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I1018 09:14:08.709307  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.709797  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.709836  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.710251  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.710505  122055 main.go:141] libmachine: (ha-744200-m03) Calling .DriverName
	I1018 09:14:08.710770  122055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:14:08.710802  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetSSHHostname
	I1018 09:14:08.714974  122055 main.go:141] libmachine: (ha-744200-m03) DBG | domain ha-744200-m03 has defined MAC address 52:54:00:f6:77:5f in network mk-ha-744200
	I1018 09:14:08.715511  122055 main.go:141] libmachine: (ha-744200-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:77:5f", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:10:18 +0000 UTC Type:0 Mac:52:54:00:f6:77:5f Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-744200-m03 Clientid:01:52:54:00:f6:77:5f}
	I1018 09:14:08.715545  122055 main.go:141] libmachine: (ha-744200-m03) DBG | domain ha-744200-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:f6:77:5f in network mk-ha-744200
	I1018 09:14:08.715716  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetSSHPort
	I1018 09:14:08.715933  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetSSHKeyPath
	I1018 09:14:08.716100  122055 main.go:141] libmachine: (ha-744200-m03) Calling .GetSSHUsername
	I1018 09:14:08.716256  122055 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/ha-744200-m03/id_rsa Username:docker}
	I1018 09:14:08.806785  122055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:14:08.826398  122055 kubeconfig.go:125] found "ha-744200" server: "https://192.168.39.254:8443"
	I1018 09:14:08.826432  122055 api_server.go:166] Checking apiserver status ...
	I1018 09:14:08.826497  122055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:14:08.849031  122055 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup
	W1018 09:14:08.863434  122055 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:14:08.863502  122055 ssh_runner.go:195] Run: ls
	I1018 09:14:08.869791  122055 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1018 09:14:08.874772  122055 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1018 09:14:08.874804  122055 status.go:463] ha-744200-m03 apiserver status = Running (err=<nil>)
	I1018 09:14:08.874815  122055 status.go:176] ha-744200-m03 status: &{Name:ha-744200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:14:08.874837  122055 status.go:174] checking status of ha-744200-m04 ...
	I1018 09:14:08.875125  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.875189  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.889521  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I1018 09:14:08.890065  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.890587  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.890615  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.891058  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.891262  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetState
	I1018 09:14:08.893427  122055 status.go:371] ha-744200-m04 host status = "Running" (err=<nil>)
	I1018 09:14:08.893447  122055 host.go:66] Checking if "ha-744200-m04" exists ...
	I1018 09:14:08.893844  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.893888  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.907938  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I1018 09:14:08.908557  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.909165  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.909198  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.909611  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.909814  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetIP
	I1018 09:14:08.912925  122055 main.go:141] libmachine: (ha-744200-m04) DBG | domain ha-744200-m04 has defined MAC address 52:54:00:40:10:ea in network mk-ha-744200
	I1018 09:14:08.913502  122055 main.go:141] libmachine: (ha-744200-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:10:ea", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:11:59 +0000 UTC Type:0 Mac:52:54:00:40:10:ea Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-744200-m04 Clientid:01:52:54:00:40:10:ea}
	I1018 09:14:08.913532  122055 main.go:141] libmachine: (ha-744200-m04) DBG | domain ha-744200-m04 has defined IP address 192.168.39.122 and MAC address 52:54:00:40:10:ea in network mk-ha-744200
	I1018 09:14:08.913791  122055 host.go:66] Checking if "ha-744200-m04" exists ...
	I1018 09:14:08.914195  122055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:14:08.914255  122055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:14:08.930075  122055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I1018 09:14:08.930705  122055 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:14:08.931265  122055 main.go:141] libmachine: Using API Version  1
	I1018 09:14:08.931294  122055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:14:08.931783  122055 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:14:08.932020  122055 main.go:141] libmachine: (ha-744200-m04) Calling .DriverName
	I1018 09:14:08.932223  122055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:14:08.932246  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetSSHHostname
	I1018 09:14:08.935631  122055 main.go:141] libmachine: (ha-744200-m04) DBG | domain ha-744200-m04 has defined MAC address 52:54:00:40:10:ea in network mk-ha-744200
	I1018 09:14:08.936198  122055 main.go:141] libmachine: (ha-744200-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:10:ea", ip: ""} in network mk-ha-744200: {Iface:virbr1 ExpiryTime:2025-10-18 10:11:59 +0000 UTC Type:0 Mac:52:54:00:40:10:ea Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-744200-m04 Clientid:01:52:54:00:40:10:ea}
	I1018 09:14:08.936236  122055 main.go:141] libmachine: (ha-744200-m04) DBG | domain ha-744200-m04 has defined IP address 192.168.39.122 and MAC address 52:54:00:40:10:ea in network mk-ha-744200
	I1018 09:14:08.936396  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetSSHPort
	I1018 09:14:08.936641  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetSSHKeyPath
	I1018 09:14:08.936832  122055 main.go:141] libmachine: (ha-744200-m04) Calling .GetSSHUsername
	I1018 09:14:08.937005  122055 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/ha-744200-m04/id_rsa Username:docker}
	I1018 09:14:09.023248  122055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:14:09.041927  122055 status.go:176] ha-744200-m04 status: &{Name:ha-744200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.29s)
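
The healthz probe logged above ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok") boils down to an HTTPS GET against the apiserver. Below is a minimal Go sketch of that pattern; it is illustrative only, not minikube's status code, and it skips TLS verification where the real client would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues an HTTPS GET and treats a 200 "ok" body as healthy.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: verification is skipped for this sketch; a real probe
		// would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", endpoint, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}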

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (37.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node start m02 --alsologtostderr -v 5
E1018 09:14:17.229063  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 node start m02 --alsologtostderr -v 5: (36.619162398s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5: (1.091059573s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.050946981s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 stop --alsologtostderr -v 5
E1018 09:15:00.164372  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:16.304070  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:17:44.006068  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 stop --alsologtostderr -v 5: (4m7.949930303s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 start --wait true --alsologtostderr -v 5
E1018 09:19:17.228706  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:20:40.308423  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 start --wait true --alsologtostderr -v 5: (1m58.689475907s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (19.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 node delete m03 --alsologtostderr -v 5: (18.530827754s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.34s)
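
The readiness check above pipes kubectl get nodes through a go-template that walks every node's conditions and prints the status of the "Ready" one. A self-contained sketch of that same template, evaluated against a hand-written stand-in for the kubectl JSON (the sample document is assumed, not captured from this run):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Stand-in for `kubectl get nodes -o json`; only the fields the template
// touches are included.
const nodesJSON = `{"items":[{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"True"}]}}]}`

const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints " True" once per node whose Ready condition is True.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}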

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (250.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 stop --alsologtostderr -v 5
E1018 09:22:16.308833  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:24:17.229351  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 stop --alsologtostderr -v 5: (4m10.503241385s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5: exit status 7 (114.973206ms)

                                                
                                                
-- stdout --
	ha-744200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:25:25.953890  126349 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:25:25.954153  126349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:25:25.954164  126349 out.go:374] Setting ErrFile to fd 2...
	I1018 09:25:25.954170  126349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:25:25.954409  126349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:25:25.954589  126349 out.go:368] Setting JSON to false
	I1018 09:25:25.954617  126349 mustload.go:65] Loading cluster: ha-744200
	I1018 09:25:25.954683  126349 notify.go:220] Checking for updates...
	I1018 09:25:25.955007  126349 config.go:182] Loaded profile config "ha-744200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:25:25.955023  126349 status.go:174] checking status of ha-744200 ...
	I1018 09:25:25.955479  126349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:25:25.955517  126349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:25:25.978216  126349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
	I1018 09:25:25.978842  126349 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:25:25.979456  126349 main.go:141] libmachine: Using API Version  1
	I1018 09:25:25.979486  126349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:25:25.979923  126349 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:25:25.980156  126349 main.go:141] libmachine: (ha-744200) Calling .GetState
	I1018 09:25:25.982170  126349 status.go:371] ha-744200 host status = "Stopped" (err=<nil>)
	I1018 09:25:25.982184  126349 status.go:384] host is not running, skipping remaining checks
	I1018 09:25:25.982189  126349 status.go:176] ha-744200 status: &{Name:ha-744200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:25:25.982207  126349 status.go:174] checking status of ha-744200-m02 ...
	I1018 09:25:25.982483  126349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:25:25.982541  126349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:25:25.995970  126349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I1018 09:25:25.996548  126349 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:25:25.997082  126349 main.go:141] libmachine: Using API Version  1
	I1018 09:25:25.997112  126349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:25:25.997579  126349 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:25:25.997813  126349 main.go:141] libmachine: (ha-744200-m02) Calling .GetState
	I1018 09:25:25.999818  126349 status.go:371] ha-744200-m02 host status = "Stopped" (err=<nil>)
	I1018 09:25:25.999832  126349 status.go:384] host is not running, skipping remaining checks
	I1018 09:25:25.999837  126349 status.go:176] ha-744200-m02 status: &{Name:ha-744200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:25:25.999861  126349 status.go:174] checking status of ha-744200-m04 ...
	I1018 09:25:26.000206  126349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:25:26.000260  126349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:25:26.013881  126349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I1018 09:25:26.014473  126349 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:25:26.015058  126349 main.go:141] libmachine: Using API Version  1
	I1018 09:25:26.015089  126349 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:25:26.015505  126349 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:25:26.015717  126349 main.go:141] libmachine: (ha-744200-m04) Calling .GetState
	I1018 09:25:26.017498  126349 status.go:371] ha-744200-m04 host status = "Stopped" (err=<nil>)
	I1018 09:25:26.017512  126349 status.go:384] host is not running, skipping remaining checks
	I1018 09:25:26.017517  126349 status.go:176] ha-744200-m04 status: &{Name:ha-744200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (250.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (108.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m47.915556054s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (108.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (85.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 node add --control-plane --alsologtostderr -v 5
E1018 09:27:16.304164  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:28:39.368324  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-744200 node add --control-plane --alsologtostderr -v 5: (1m24.782381259s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-744200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
x
+
TestJSONOutput/start/Command (53.28s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-447135 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:29:17.228467  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-447135 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.283412106s)
--- PASS: TestJSONOutput/start/Command (53.28s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-447135 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-447135 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (8.05s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-447135 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-447135 --output=json --user=testUser: (8.047224567s)
--- PASS: TestJSONOutput/stop/Command (8.05s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-291457 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-291457 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.885673ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e50420fe-1388-453b-8520-f6027d9fda9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-291457] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76acdf26-d905-4d1f-8904-39743b41ca66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21764"}}
	{"specversion":"1.0","id":"0ea0cc80-e523-4e31-b874-093b0dbb6b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f2d5ec8-4b66-4320-bff2-19ae2e5fa08c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig"}}
	{"specversion":"1.0","id":"a1ac523f-dbed-4e28-8028-952f69d73eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube"}}
	{"specversion":"1.0","id":"1edd4515-f939-49bb-b161-9e629bf89e85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e8594f21-8d0e-417f-8e2f-d33207078076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"31e37029-56f7-4e2a-a4ed-8f0727db2fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-291457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-291457
--- PASS: TestErrorJSONOutput (0.21s)
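
Each --output=json line above is a CloudEvents-style envelope with the payload under "data". A minimal decoder sketch, reusing the error event emitted by this run; the struct is illustrative, not a type from minikube itself:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope fields visible in the log above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"31e37029-56f7-4e2a-a4ed-8f0727db2fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Error events carry the exit code and a machine-readable name in data.
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}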

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (85.15s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-219131 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-219131 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.654881237s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-221496 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-221496 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.68473303s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-219131
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-221496
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-221496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-221496
helpers_test.go:175: Cleaning up "first-219131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-219131
--- PASS: TestMinikubeProfile (85.15s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (21.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-957260 --memory=3072 --mount-string /tmp/TestMountStartserial4197626180/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-957260 --memory=3072 --mount-string /tmp/TestMountStartserial4197626180/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.487688336s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.49s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-957260 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-957260 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
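
The VerifyMount* steps assert the mount two ways: ls for contents and findmnt --json for the mount record. A sketch decoding findmnt's JSON shape follows; the sample output is an assumed stand-in for what the guest would report (minikube host mounts are typically 9p):

package main

import (
	"encoding/json"
	"fmt"
)

// findmntOutput mirrors the documented shape of `findmnt --json`.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Assumed stand-in for output from the minikube guest.
	sample := `{"filesystems":[{"target":"/minikube-host",
	    "source":"192.168.39.1:/tmp/TestMountStartserial4197626180/001",
	    "fstype":"9p","options":"rw,relatime"}]}`

	var out findmntOutput
	if err := json.Unmarshal([]byte(sample), &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		// The verify step passes when the expected target shows up mounted.
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}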

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (20.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971906 --memory=3072 --mount-string /tmp/TestMountStartserial4197626180/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971906 --memory=3072 --mount-string /tmp/TestMountStartserial4197626180/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.992874101s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.75s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-957260 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-971906
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-971906: (1.261246782s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (20.12s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971906
E1018 09:32:16.304062  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971906: (19.119706839s)
--- PASS: TestMountStart/serial/RestartStopped (20.12s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971906 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (96.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-670094 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-670094 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.973534393s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-670094 -- rollout status deployment/busybox: (3.9092102s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-2pl7s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-4krm5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-2pl7s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-4krm5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-2pl7s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-4krm5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-2pl7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-2pl7s -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-4krm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-670094 -- exec busybox-7b57f96db7-4krm5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
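
The pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 above picks line 5 of the nslookup output and takes its third space-separated field, which in busybox's layout is the resolved host IP that the follow-up ping targets. A sketch of the same extraction in Go, over an assumed stand-in for the busybox output:

package main

import (
	"fmt"
	"strings"
)

// Assumed stand-in for busybox nslookup output; the exact layout varies by
// busybox version, which is why the test hard-codes line 5.
const nslookupOut = `Server:    10.96.0.10
Address 1: 10.96.0.10

Name:      host.minikube.internal
Address 1: 192.168.39.1
`

func main() {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return
	}
	// awk 'NR==5' keeps line 5; cut -d' ' -f3 takes its third field.
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // 192.168.39.1
	}
}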

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-670094 -v=5 --alsologtostderr
E1018 09:34:17.228956  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-670094 -v=5 --alsologtostderr: (41.398541682s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.01s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-670094 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp testdata/cp-test.txt multinode-670094:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3368024400/001/cp-test_multinode-670094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094:/home/docker/cp-test.txt multinode-670094-m02:/home/docker/cp-test_multinode-670094_multinode-670094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test_multinode-670094_multinode-670094-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094:/home/docker/cp-test.txt multinode-670094-m03:/home/docker/cp-test_multinode-670094_multinode-670094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test_multinode-670094_multinode-670094-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp testdata/cp-test.txt multinode-670094-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3368024400/001/cp-test_multinode-670094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m02:/home/docker/cp-test.txt multinode-670094:/home/docker/cp-test_multinode-670094-m02_multinode-670094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test_multinode-670094-m02_multinode-670094.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m02:/home/docker/cp-test.txt multinode-670094-m03:/home/docker/cp-test_multinode-670094-m02_multinode-670094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test_multinode-670094-m02_multinode-670094-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp testdata/cp-test.txt multinode-670094-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3368024400/001/cp-test_multinode-670094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m03:/home/docker/cp-test.txt multinode-670094:/home/docker/cp-test_multinode-670094-m03_multinode-670094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094 "sudo cat /home/docker/cp-test_multinode-670094-m03_multinode-670094.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 cp multinode-670094-m03:/home/docker/cp-test.txt multinode-670094-m02:/home/docker/cp-test_multinode-670094-m03_multinode-670094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 ssh -n multinode-670094-m02 "sudo cat /home/docker/cp-test_multinode-670094-m03_multinode-670094-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-670094 node stop m03: (1.742490587s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-670094 status: exit status 7 (443.126109ms)

                                                
                                                
-- stdout --
	multinode-670094
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-670094-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-670094-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr: exit status 7 (451.483875ms)

                                                
                                                
-- stdout --
	multinode-670094
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-670094-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-670094-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:34:56.313540  133910 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:56.313840  133910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:56.313851  133910 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:56.313856  133910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:56.314061  133910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:34:56.314254  133910 out.go:368] Setting JSON to false
	I1018 09:34:56.314285  133910 mustload.go:65] Loading cluster: multinode-670094
	I1018 09:34:56.314410  133910 notify.go:220] Checking for updates...
	I1018 09:34:56.314708  133910 config.go:182] Loaded profile config "multinode-670094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:56.314727  133910 status.go:174] checking status of multinode-670094 ...
	I1018 09:34:56.315265  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.315315  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.330061  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I1018 09:34:56.330573  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.331102  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.331131  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.331502  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.331729  133910 main.go:141] libmachine: (multinode-670094) Calling .GetState
	I1018 09:34:56.333654  133910 status.go:371] multinode-670094 host status = "Running" (err=<nil>)
	I1018 09:34:56.333672  133910 host.go:66] Checking if "multinode-670094" exists ...
	I1018 09:34:56.333982  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.334049  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.348552  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I1018 09:34:56.349097  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.349581  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.349607  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.349985  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.350282  133910 main.go:141] libmachine: (multinode-670094) Calling .GetIP
	I1018 09:34:56.353507  133910 main.go:141] libmachine: (multinode-670094) DBG | domain multinode-670094 has defined MAC address 52:54:00:6a:32:84 in network mk-multinode-670094
	I1018 09:34:56.354035  133910 main.go:141] libmachine: (multinode-670094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:32:84", ip: ""} in network mk-multinode-670094: {Iface:virbr1 ExpiryTime:2025-10-18 10:32:37 +0000 UTC Type:0 Mac:52:54:00:6a:32:84 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-670094 Clientid:01:52:54:00:6a:32:84}
	I1018 09:34:56.354052  133910 main.go:141] libmachine: (multinode-670094) DBG | domain multinode-670094 has defined IP address 192.168.39.60 and MAC address 52:54:00:6a:32:84 in network mk-multinode-670094
	I1018 09:34:56.354304  133910 host.go:66] Checking if "multinode-670094" exists ...
	I1018 09:34:56.354630  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.354704  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.368908  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I1018 09:34:56.369431  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.369890  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.369911  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.370367  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.370575  133910 main.go:141] libmachine: (multinode-670094) Calling .DriverName
	I1018 09:34:56.370771  133910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:34:56.370812  133910 main.go:141] libmachine: (multinode-670094) Calling .GetSSHHostname
	I1018 09:34:56.374579  133910 main.go:141] libmachine: (multinode-670094) DBG | domain multinode-670094 has defined MAC address 52:54:00:6a:32:84 in network mk-multinode-670094
	I1018 09:34:56.375131  133910 main.go:141] libmachine: (multinode-670094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:32:84", ip: ""} in network mk-multinode-670094: {Iface:virbr1 ExpiryTime:2025-10-18 10:32:37 +0000 UTC Type:0 Mac:52:54:00:6a:32:84 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-670094 Clientid:01:52:54:00:6a:32:84}
	I1018 09:34:56.375183  133910 main.go:141] libmachine: (multinode-670094) DBG | domain multinode-670094 has defined IP address 192.168.39.60 and MAC address 52:54:00:6a:32:84 in network mk-multinode-670094
	I1018 09:34:56.375344  133910 main.go:141] libmachine: (multinode-670094) Calling .GetSSHPort
	I1018 09:34:56.375519  133910 main.go:141] libmachine: (multinode-670094) Calling .GetSSHKeyPath
	I1018 09:34:56.375661  133910 main.go:141] libmachine: (multinode-670094) Calling .GetSSHUsername
	I1018 09:34:56.375795  133910 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/multinode-670094/id_rsa Username:docker}
	I1018 09:34:56.459865  133910 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:56.465935  133910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:56.482955  133910 kubeconfig.go:125] found "multinode-670094" server: "https://192.168.39.60:8443"
	I1018 09:34:56.482998  133910 api_server.go:166] Checking apiserver status ...
	I1018 09:34:56.483041  133910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:34:56.503155  133910 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1394/cgroup
	W1018 09:34:56.514352  133910 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1394/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:34:56.514418  133910 ssh_runner.go:195] Run: ls
	I1018 09:34:56.519446  133910 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I1018 09:34:56.524133  133910 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I1018 09:34:56.524173  133910 status.go:463] multinode-670094 apiserver status = Running (err=<nil>)
	I1018 09:34:56.524186  133910 status.go:176] multinode-670094 status: &{Name:multinode-670094 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:34:56.524210  133910 status.go:174] checking status of multinode-670094-m02 ...
	I1018 09:34:56.524512  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.524559  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.538535  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I1018 09:34:56.539067  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.539529  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.539550  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.539869  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.540083  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetState
	I1018 09:34:56.541951  133910 status.go:371] multinode-670094-m02 host status = "Running" (err=<nil>)
	I1018 09:34:56.541972  133910 host.go:66] Checking if "multinode-670094-m02" exists ...
	I1018 09:34:56.542326  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.542370  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.558798  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I1018 09:34:56.559311  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.559799  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.559820  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.560212  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.560425  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetIP
	I1018 09:34:56.563878  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | domain multinode-670094-m02 has defined MAC address 52:54:00:56:cd:31 in network mk-multinode-670094
	I1018 09:34:56.564399  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:cd:31", ip: ""} in network mk-multinode-670094: {Iface:virbr1 ExpiryTime:2025-10-18 10:33:30 +0000 UTC Type:0 Mac:52:54:00:56:cd:31 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-670094-m02 Clientid:01:52:54:00:56:cd:31}
	I1018 09:34:56.564421  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | domain multinode-670094-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:56:cd:31 in network mk-multinode-670094
	I1018 09:34:56.564645  133910 host.go:66] Checking if "multinode-670094-m02" exists ...
	I1018 09:34:56.564934  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.564969  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.580013  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I1018 09:34:56.580535  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.581046  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.581075  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.581468  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.581710  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .DriverName
	I1018 09:34:56.581922  133910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:34:56.581948  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetSSHHostname
	I1018 09:34:56.585216  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | domain multinode-670094-m02 has defined MAC address 52:54:00:56:cd:31 in network mk-multinode-670094
	I1018 09:34:56.585634  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:cd:31", ip: ""} in network mk-multinode-670094: {Iface:virbr1 ExpiryTime:2025-10-18 10:33:30 +0000 UTC Type:0 Mac:52:54:00:56:cd:31 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-670094-m02 Clientid:01:52:54:00:56:cd:31}
	I1018 09:34:56.585679  133910 main.go:141] libmachine: (multinode-670094-m02) DBG | domain multinode-670094-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:56:cd:31 in network mk-multinode-670094
	I1018 09:34:56.585871  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetSSHPort
	I1018 09:34:56.586070  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetSSHKeyPath
	I1018 09:34:56.586245  133910 main.go:141] libmachine: (multinode-670094-m02) Calling .GetSSHUsername
	I1018 09:34:56.586400  133910 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21764-104457/.minikube/machines/multinode-670094-m02/id_rsa Username:docker}
	I1018 09:34:56.671466  133910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:56.694271  133910 status.go:176] multinode-670094-m02 status: &{Name:multinode-670094-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:34:56.694326  133910 status.go:174] checking status of multinode-670094-m03 ...
	I1018 09:34:56.694617  133910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:34:56.694664  133910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:34:56.710637  133910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1018 09:34:56.711134  133910 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:34:56.711563  133910 main.go:141] libmachine: Using API Version  1
	I1018 09:34:56.711586  133910 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:34:56.712027  133910 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:34:56.712295  133910 main.go:141] libmachine: (multinode-670094-m03) Calling .GetState
	I1018 09:34:56.714212  133910 status.go:371] multinode-670094-m03 host status = "Stopped" (err=<nil>)
	I1018 09:34:56.714230  133910 status.go:384] host is not running, skipping remaining checks
	I1018 09:34:56.714237  133910 status.go:176] multinode-670094-m03 status: &{Name:multinode-670094-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.64s)
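
The status flow captured above follows a fixed sequence per node: query the libmachine plugin for host state, SSH in to check disk usage and whether kubelet is active, then probe the apiserver's /healthz endpoint and map an HTTP 200 "ok" to APIServer:Running. A minimal Go sketch of that final probe, assuming the endpoint from the log and a self-signed cluster certificate (hence the insecure transport); this is an illustration, not minikube's actual code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // apiserverRunning reports whether GET <endpoint>/healthz answers 200,
    // the same signal the log above maps to "apiserver status = Running".
    func apiserverRunning(endpoint string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: the test cluster serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := apiserverRunning("https://192.168.39.60:8443")
        fmt.Println(ok, err)
    }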

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-670094 node start m03 -v=5 --alsologtostderr: (38.618647356s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.29s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (296.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-670094
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-670094
E1018 09:37:16.312261  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:20.312724  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-670094: (2m51.418650931s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-670094 --wait=true -v=5 --alsologtostderr
E1018 09:39:17.228658  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-670094 --wait=true -v=5 --alsologtostderr: (2m4.970066806s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-670094
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.50s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-670094 node delete m03: (2.185866738s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.75s)
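
The last command above pipes `kubectl get nodes` through a go-template that walks every node's conditions and prints the status of the Ready condition. The same template can be exercised standalone; in this sketch the field names are capitalised because text/template over Go structs needs exported fields, whereas kubectl evaluates the lowercase JSON keys:

    package main

    import (
        "os"
        "text/template"
    )

    type condition struct{ Type, Status string }

    type node struct {
        Status struct{ Conditions []condition }
    }

    type nodeList struct{ Items []node }

    func main() {
        // Same shape as the kubectl template above, with exported field names.
        tmpl := template.Must(template.New("ready").Parse(
            `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))

        var list nodeList
        n := node{}
        n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
        list.Items = []node{n, n} // two stand-in nodes, both Ready

        _ = tmpl.Execute(os.Stdout, list) // prints " True" once per node
    }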

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 stop
E1018 09:42:16.311929  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-670094 stop: (2m25.408207254s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-670094 status: exit status 7 (96.095218ms)

                                                
                                                
-- stdout --
	multinode-670094
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-670094-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr: exit status 7 (86.625452ms)

                                                
                                                
-- stdout --
	multinode-670094
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-670094-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:43:00.812173  136598 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:43:00.812299  136598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:00.812307  136598 out.go:374] Setting ErrFile to fd 2...
	I1018 09:43:00.812316  136598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:00.812546  136598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:43:00.812758  136598 out.go:368] Setting JSON to false
	I1018 09:43:00.812804  136598 mustload.go:65] Loading cluster: multinode-670094
	I1018 09:43:00.812987  136598 notify.go:220] Checking for updates...
	I1018 09:43:00.813330  136598 config.go:182] Loaded profile config "multinode-670094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:00.813351  136598 status.go:174] checking status of multinode-670094 ...
	I1018 09:43:00.813832  136598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:43:00.813883  136598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:43:00.827800  136598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I1018 09:43:00.828384  136598 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:43:00.828996  136598 main.go:141] libmachine: Using API Version  1
	I1018 09:43:00.829028  136598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:43:00.829456  136598 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:43:00.829661  136598 main.go:141] libmachine: (multinode-670094) Calling .GetState
	I1018 09:43:00.831283  136598 status.go:371] multinode-670094 host status = "Stopped" (err=<nil>)
	I1018 09:43:00.831310  136598 status.go:384] host is not running, skipping remaining checks
	I1018 09:43:00.831316  136598 status.go:176] multinode-670094 status: &{Name:multinode-670094 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:43:00.831348  136598 status.go:174] checking status of multinode-670094-m02 ...
	I1018 09:43:00.831634  136598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1018 09:43:00.831669  136598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1018 09:43:00.845329  136598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1018 09:43:00.845846  136598 main.go:141] libmachine: () Calling .GetVersion
	I1018 09:43:00.846289  136598 main.go:141] libmachine: Using API Version  1
	I1018 09:43:00.846311  136598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1018 09:43:00.846715  136598 main.go:141] libmachine: () Calling .GetMachineName
	I1018 09:43:00.846953  136598 main.go:141] libmachine: (multinode-670094-m02) Calling .GetState
	I1018 09:43:00.848809  136598 status.go:371] multinode-670094-m02 host status = "Stopped" (err=<nil>)
	I1018 09:43:00.848822  136598 status.go:384] host is not running, skipping remaining checks
	I1018 09:43:00.848827  136598 status.go:176] multinode-670094-m02 status: &{Name:multinode-670094-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (145.59s)
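
Both status invocations above exit with status 7, which the helpers elsewhere in this report annotate as "may be ok": a fully stopped cluster is reported through the exit code rather than through an error. A sketch of how a caller might consume that convention, with the binary path and profile name taken from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-670094", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("cluster running")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // exit 7 is how a stopped cluster is reported in these logs
            fmt.Println("cluster stopped (exit 7, may be ok)")
        default:
            fmt.Println("status failed:", err)
        }
    }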

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-670094 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:44:17.229299  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-670094 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.915201522s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-670094 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.49s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-670094
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-670094-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-670094-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (66.929615ms)

                                                
                                                
-- stdout --
	* [multinode-670094-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-670094-m02' is duplicated with machine name 'multinode-670094-m02' in profile 'multinode-670094'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-670094-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-670094-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.383240428s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-670094
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-670094: exit status 80 (238.307629ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-670094 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-670094-m03 already exists in multinode-670094-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-670094-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.63s)
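
The two failures above exercise the same rule from opposite ends: a new profile may not reuse a machine name owned by an existing profile, and `node add` refuses a node name that already exists as a profile. A hypothetical sketch of the first check; the map layout and the message are illustrative, not minikube's actual data model:

    package main

    import "fmt"

    // validateProfileName rejects a candidate profile name that is already
    // in use as a machine name inside another profile.
    func validateProfileName(name string, machineOwner map[string]string) error {
        if profile, taken := machineOwner[name]; taken {
            return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
                name, name, profile)
        }
        return nil
    }

    func main() {
        // machine name -> owning profile, mirroring the names in this run
        machineOwner := map[string]string{
            "multinode-670094":     "multinode-670094",
            "multinode-670094-m02": "multinode-670094",
        }
        fmt.Println(validateProfileName("multinode-670094-m02", machineOwner)) // rejected
        fmt.Println(validateProfileName("brand-new-profile", machineOwner))    // ok
    }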

                                                
                                    
TestScheduledStopUnix (109.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-459640 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-459640 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.739387112s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459640 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-459640 -n scheduled-stop-459640
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 09:48:00.237732  108373 retry.go:31] will retry after 85.878µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.238921  108373 retry.go:31] will retry after 211.968µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.240091  108373 retry.go:31] will retry after 305.132µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.241208  108373 retry.go:31] will retry after 406.847µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.242358  108373 retry.go:31] will retry after 561.418µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.243501  108373 retry.go:31] will retry after 502.637µs: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.244663  108373 retry.go:31] will retry after 1.607646ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.246891  108373 retry.go:31] will retry after 1.162409ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.249096  108373 retry.go:31] will retry after 2.285265ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.252289  108373 retry.go:31] will retry after 2.66523ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.255488  108373 retry.go:31] will retry after 3.327474ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.258892  108373 retry.go:31] will retry after 8.218697ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.268201  108373 retry.go:31] will retry after 9.268283ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.278489  108373 retry.go:31] will retry after 14.256021ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.293829  108373 retry.go:31] will retry after 22.328336ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
I1018 09:48:00.316470  108373 retry.go:31] will retry after 33.559415ms: open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/scheduled-stop-459640/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459640 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459640 -n scheduled-stop-459640
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459640
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459640 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459640
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-459640: exit status 7 (78.329002ms)

                                                
                                                
-- stdout --
	scheduled-stop-459640
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459640 -n scheduled-stop-459640
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459640 -n scheduled-stop-459640: exit status 7 (67.384325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-459640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-459640
--- PASS: TestScheduledStopUnix (109.49s)
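
The retry.go lines above show the test polling the scheduled-stop pid file, backing off with roughly growing delays each time the open fails. A sketch of that polling pattern; the doubling factor and the pid-file path are assumptions, since the logged intervals jitter rather than double exactly:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPidFile re-reads a pid file until it appears, growing the delay
    // between attempts, much like the "will retry after ..." lines above.
    func waitForPidFile(path string, maxWait time.Duration) ([]byte, error) {
        delay := 100 * time.Microsecond
        deadline := time.Now().Add(maxWait)
        for {
            data, err := os.ReadFile(path)
            if err == nil {
                return data, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("giving up on %s: %w", path, err)
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // assumption: the real helper's backoff curve differs
        }
    }

    func main() {
        pid, err := waitForPidFile("/tmp/scheduled-stop/pid", 2*time.Second) // hypothetical path
        fmt.Println(string(pid), err)
    }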

                                                
                                    
TestRunningBinaryUpgrade (144.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3919529269 start -p running-upgrade-553587 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3919529269 start -p running-upgrade-553587 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.323656497s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-553587 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-553587 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.603914608s)
helpers_test.go:175: Cleaning up "running-upgrade-553587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-553587
--- PASS: TestRunningBinaryUpgrade (144.60s)

                                                
                                    
TestKubernetesUpgrade (131.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.33255223s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-689545
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-689545: (1.859379878s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-689545 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-689545 status --format={{.Host}}: exit status 7 (76.500768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.11873312s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-689545 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (97.008754ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-689545] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-689545
	    minikube start -p kubernetes-upgrade-689545 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6895452 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-689545 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-689545 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.361486327s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-689545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-689545
--- PASS: TestKubernetesUpgrade (131.86s)
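
The downgrade attempt above exits 106 with K8S_DOWNGRADE_UNSUPPORTED: minikube compares the requested version against the running cluster's version and refuses anything lower. A sketch of such a guard using golang.org/x/mod/semver; the helper is an assumption and the error text paraphrases the log:

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkVersion refuses any requested version below the cluster's current
    // one; upgrades and same-version restarts pass through.
    func checkVersion(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
                current, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersion("v1.34.1", "v1.28.0")) // refused, as in the log
        fmt.Println(checkVersion("v1.28.0", "v1.34.1")) // upgrade: allowed
    }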

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (96.866395ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-385311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
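
The exit-14 failure above comes from flag validation: --kubernetes-version is meaningless together with --no-kubernetes, so the combination is rejected before any VM work starts. A standalone sketch of that kind of mutual-exclusion check (illustrative, not minikube's actual flag wiring):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
        flag.Parse()

        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // the exit status observed above
        }
        fmt.Println("flag combination ok")
    }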

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (103.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385311 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:49:17.228440  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385311 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.743640033s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-385311 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.04s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (34.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (32.860394147s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-385311 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-385311 status -o json: exit status 2 (261.866463ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-385311","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-385311
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.10s)

                                                
                                    
TestNoKubernetes/serial/Start (34.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385311 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (34.66644112s)
--- PASS: TestNoKubernetes/serial/Start (34.67s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-385311 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-385311 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.152135ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
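
The assertion above runs `systemctl is-active --quiet ...` on the node and expects a non-zero exit: is-active exits 0 only when the unit is active, so a failing exit is exactly what "Kubernetes not running" should look like. A local sketch of the same check, dropping the minikube ssh wrapper for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletInactive returns true when `systemctl is-active --quiet kubelet`
    // exits non-zero, i.e. the unit is anything but active.
    func kubeletInactive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() != nil
    }

    func main() {
        fmt.Println("kubelet inactive:", kubeletInactive())
    }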

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.411116931s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.32s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-385311
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-385311: (1.429247775s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (33.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-385311 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 09:52:16.303753  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-385311 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (33.896500858s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (87.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.158745088 start -p stopped-upgrade-461592 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.158745088 start -p stopped-upgrade-461592 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.969268536s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.158745088 -p stopped-upgrade-461592 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.158745088 -p stopped-upgrade-461592 stop: (1.815439742s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-461592 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-461592 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.900928647s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.69s)
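
The upgrade above is three steps against one profile: boot it with an old released binary, stop it, then start the same profile with the binary under test. A condensed sketch of that sequence; the binary paths are the temp-file names from this run, and most of the start flags are omitted:

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one CLI step and aborts the sketch on the first failure.
    func run(bin string, args ...string) {
        if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
        }
    }

    func main() {
        oldBin := "/tmp/minikube-v1.32.0.158745088" // released binary from this run
        newBin := "out/minikube-linux-amd64"        // binary under test
        profile := "stopped-upgrade-461592"

        run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=kvm2")
        run(oldBin, "-p", profile, "stop")
        run(newBin, "start", "-p", profile, "--memory=3072")
    }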

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-385311 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-385311 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.946953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestNetworkPlugins/group/false (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-882442 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-882442 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (113.568216ms)

                                                
                                                
-- stdout --
	* [false-882442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:52:49.915759  144622 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:52:49.916038  144622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:52:49.916049  144622 out.go:374] Setting ErrFile to fd 2...
	I1018 09:52:49.916056  144622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:52:49.916265  144622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-104457/.minikube/bin
	I1018 09:52:49.916813  144622 out.go:368] Setting JSON to false
	I1018 09:52:49.917854  144622 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5710,"bootTime":1760775460,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:52:49.917959  144622 start.go:141] virtualization: kvm guest
	I1018 09:52:49.920205  144622 out.go:179] * [false-882442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:52:49.921523  144622 notify.go:220] Checking for updates...
	I1018 09:52:49.921565  144622 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:52:49.922838  144622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:52:49.924592  144622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-104457/kubeconfig
	I1018 09:52:49.925750  144622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-104457/.minikube
	I1018 09:52:49.929686  144622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:52:49.930935  144622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:52:49.932800  144622 config.go:182] Loaded profile config "cert-expiration-464564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:52:49.932983  144622 config.go:182] Loaded profile config "kubernetes-upgrade-689545": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:52:49.933109  144622 config.go:182] Loaded profile config "stopped-upgrade-461592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:52:49.933233  144622 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:52:49.972294  144622 out.go:179] * Using the kvm2 driver based on user configuration
	I1018 09:52:49.973707  144622 start.go:305] selected driver: kvm2
	I1018 09:52:49.973726  144622 start.go:925] validating driver "kvm2" against <nil>
	I1018 09:52:49.973748  144622 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:52:49.975806  144622 out.go:203] 
	W1018 09:52:49.976986  144622 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 09:52:49.978585  144622 out.go:203] 

                                                
                                                
** /stderr **
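
The MK_USAGE exit above fires during validation: with the crio container runtime, pod networking needs a CNI plugin, so --cni=false is rejected before any cluster is created (which is why the debugLogs that follow find no false-882442 context). A hypothetical sketch of that rule; the function shape is illustrative, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
    )

    // validateCNI mirrors the rule from the log: disabling CNI is only
    // acceptable for runtimes that can network pods without it.
    func validateCNI(runtime, cni string) error {
        if cni == "false" && runtime == "crio" {
            return errors.New(`the "crio" container runtime requires CNI`)
        }
        return nil
    }

    func main() {
        fmt.Println(validateCNI("crio", "false"))   // rejected, as in the test
        fmt.Println(validateCNI("docker", "false")) // allowed here
    }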
net_test.go:88: 
----------------------- debugLogs start: false-882442 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-882442

                                                
                                                


>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-882442

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-882442

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-882442

>>> host: /etc/nsswitch.conf:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/hosts:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/resolv.conf:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-882442

>>> host: crictl pods:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: crictl containers:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> k8s: describe netcat deployment:
error: context "false-882442" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-882442" does not exist

>>> k8s: netcat logs:
error: context "false-882442" does not exist

>>> k8s: describe coredns deployment:
error: context "false-882442" does not exist

>>> k8s: describe coredns pods:
error: context "false-882442" does not exist

>>> k8s: coredns logs:
error: context "false-882442" does not exist

>>> k8s: describe api server pod(s):
error: context "false-882442" does not exist

>>> k8s: api server logs:
error: context "false-882442" does not exist

>>> host: /etc/cni:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: ip a s:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: ip r s:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: iptables-save:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: iptables table nat:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> k8s: describe kube-proxy daemon set:
error: context "false-882442" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-882442" does not exist

>>> k8s: kube-proxy logs:
error: context "false-882442" does not exist

>>> host: kubelet daemon status:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: kubelet daemon config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> k8s: kubelet logs:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.128:8443
  name: cert-expiration-464564
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.140:8443
  name: kubernetes-upgrade-689545
contexts:
- context:
    cluster: cert-expiration-464564
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-464564
  name: cert-expiration-464564
- context:
    cluster: kubernetes-upgrade-689545
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-689545
  name: kubernetes-upgrade-689545
current-context: kubernetes-upgrade-689545
kind: Config
users:
- name: cert-expiration-464564
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.key
- name: kubernetes-upgrade-689545
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-882442

>>> host: docker daemon status:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: docker daemon config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/docker/daemon.json:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: docker system info:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: cri-docker daemon status:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: cri-docker daemon config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: cri-dockerd version:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: containerd daemon status:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: containerd daemon config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/containerd/config.toml:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: containerd config dump:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: crio daemon status:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: crio daemon config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: /etc/crio:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

>>> host: crio config:
* Profile "false-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882442"

----------------------- debugLogs end: false-882442 [took: 3.489395464s] --------------------------------
helpers_test.go:175: Cleaning up "false-882442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-882442
--- PASS: TestNetworkPlugins/group/false (3.76s)
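
Note: every failure in the debugLogs dump above is the same "profile/context not found" mode — no cluster was ever created for false-882442, so each probe the collector runs degrades into a stock error rather than real diagnostics. A minimal reproduction with the kinds of commands the collector uses (a sketch; any absent profile name behaves the same):

    kubectl --context false-882442 get pods          # Error in configuration: context was not found for specified context: false-882442
    minikube ssh -p false-882442 "cat /etc/hosts"    # * Profile "false-882442" not found. ...
    minikube profile list                            # false-882442 is not listed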

TestPause/serial/Start (76.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-551330 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-551330 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.02548362s)
--- PASS: TestPause/serial/Start (76.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-461592
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-461592: (1.186545927s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (63.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m3.56419836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.56s)

TestStartStop/group/no-preload/serial/FirstStart (85.17s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m25.171869863s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.17s)
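
Note: --preload=false is what makes this FirstStart the slow one — minikube skips the preloaded images tarball and pulls each component image into the crio runtime individually. A sketch of the distinction, using the same flags as the run above:

    minikube start -p no-preload-231061 --preload=false --container-runtime=crio --kubernetes-version=v1.34.1   # pulls images one by one
    minikube -p no-preload-231061 image list   # the resulting inventory is the same either way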

TestStartStop/group/embed-certs/serial/FirstStart (94.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m34.040309887s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-066041 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aad15c7f-f709-40be-86e8-12317d509fd8] Pending
helpers_test.go:352: "busybox" [aad15c7f-f709-40be-86e8-12317d509fd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aad15c7f-f709-40be-86e8-12317d509fd8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003486639s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-066041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
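
Note: DeployApp is the same three-step check each profile repeats below: create the busybox pod from testdata/busybox.yaml, poll until the pod labelled integration-test=busybox is Running, then exec a sanity command inside it. Roughly the same thing by hand with kubectl (a sketch; the test's own polling lives in helpers_test.go):

    kubectl --context old-k8s-version-066041 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-066041 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-066041 exec busybox -- /bin/sh -c "ulimit -n"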

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-066041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-066041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.623712342s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-066041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

TestStartStop/group/old-k8s-version/serial/Stop (75.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-066041 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-066041 --alsologtostderr -v=3: (1m15.43555045s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (75.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-354737 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-354737 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.712229358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.71s)
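
Note: --apiserver-port=8444 moves the API server off minikube's default of 8443, which is the point of the default-k8s-diff-port group. The kubeconfig entry minikube writes should then carry the non-default port (a sketch; the jsonpath query is illustrative):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-354737")].cluster.server}'
    # expected: https://<cluster-ip>:8444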

TestStartStop/group/no-preload/serial/DeployApp (11.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-231061 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1663619c-79cc-49a5-ac00-796d50ee74dd] Pending
helpers_test.go:352: "busybox" [1663619c-79cc-49a5-ac00-796d50ee74dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1663619c-79cc-49a5-ac00-796d50ee74dd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004906135s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-231061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-231061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-231061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (88.95s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-231061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-231061 --alsologtostderr -v=3: (1m28.948910281s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.95s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-512028 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bbfd39b3-0c40-4f0a-9ef5-c708415fb792] Pending
helpers_test.go:352: "busybox" [bbfd39b3-0c40-4f0a-9ef5-c708415fb792] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bbfd39b3-0c40-4f0a-9ef5-c708415fb792] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004280648s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-512028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-512028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-512028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (86.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-512028 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-512028 --alsologtostderr -v=3: (1m26.296958911s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-354737 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2573506d-fd23-4994-8698-be7b9248ff96] Pending
helpers_test.go:352: "busybox" [2573506d-fd23-4994-8698-be7b9248ff96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2573506d-fd23-4994-8698-be7b9248ff96] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005251212s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-354737 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-066041 -n old-k8s-version-066041
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-066041 -n old-k8s-version-066041: exit status 7 (77.711843ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-066041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
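
Note: EnableAddonAfterStop leans on two behaviors visible in the log: minikube status exits with code 7 when the host is stopped (which the test explicitly treats as acceptable), and addons enable still succeeds against the stopped cluster, since it only has to update the profile's addon configuration. A sketch:

    minikube status -p old-k8s-version-066041 --format={{.Host}}   # prints "Stopped", exit status 7
    minikube addons enable dashboard -p old-k8s-version-066041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4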

TestStartStop/group/old-k8s-version/serial/SecondStart (44.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-066041 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (43.765601268s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-066041 -n old-k8s-version-066041
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-354737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-354737 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (82.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-354737 --alsologtostderr -v=3
E1018 09:57:16.304332  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-354737 --alsologtostderr -v=3: (1m22.091879885s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9ctmc" [649bf231-3cfe-47af-94c5-a3d391ab6673] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9ctmc" [649bf231-3cfe-47af-94c5-a3d391ab6673] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004156727s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-231061 -n no-preload-231061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-231061 -n no-preload-231061: exit status 7 (66.107711ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-231061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (59.15s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-231061 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (58.662992993s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-231061 -n no-preload-231061
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.15s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9ctmc" [649bf231-3cfe-47af-94c5-a3d391ab6673] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004414302s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-066041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-066041 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
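
Note: VerifyKubernetesImages reads back the image inventory from the runtime and reports anything outside the expected Kubernetes image set as "non-minikube" — here the kindnet CNI image and the busybox test image pulled by earlier steps. The same inventory can be inspected by hand:

    minikube -p old-k8s-version-066041 image list --format=json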

TestStartStop/group/old-k8s-version/serial/Pause (3.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-066041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-066041 -n old-k8s-version-066041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-066041 -n old-k8s-version-066041: exit status 2 (277.855623ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-066041 -n old-k8s-version-066041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-066041 -n old-k8s-version-066041: exit status 2 (265.542375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-066041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-066041 -n old-k8s-version-066041
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-066041 -n old-k8s-version-066041
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.01s)
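
Note: the exit status 2 results in the Pause step are expected, not failures: after minikube pause, status --format={{.APIServer}} reports Paused while status --format={{.Kubelet}} reports Stopped, and status signals any non-Running component through a non-zero exit code. The cycle the test drives, as a sketch:

    minikube pause -p old-k8s-version-066041
    minikube status -p old-k8s-version-066041 --format={{.APIServer}}   # Paused (exit status 2)
    minikube status -p old-k8s-version-066041 --format={{.Kubelet}}     # Stopped (exit status 2)
    minikube unpause -p old-k8s-version-066041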

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512028 -n embed-certs-512028
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512028 -n embed-certs-512028: exit status 7 (90.211772ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (56.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512028 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.165998482s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512028 -n embed-certs-512028
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.66s)

TestStartStop/group/newest-cni/serial/FirstStart (75.81s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-225568 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-225568 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m15.813517934s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.81s)
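
Note: the newest-cni start exercises the CNI-specific plumbing: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands the pod CIDR straight to kubeadm, and --wait=apiserver,system_pods,default_sa narrows the components minikube blocks on, instead of the --wait=all used elsewhere in this run. The relevant flags, isolated as a sketch:

    minikube start -p newest-cni-225568 --driver=kvm2 --container-runtime=crio \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --wait=apiserver,system_pods,default_sa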

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737: exit status 7 (80.982713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-354737 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (74.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-354737 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-354737 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m14.586426487s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (74.94s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9pnw" [6e488bd5-b349-4525-bff1-4268ce339f70] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9pnw" [6e488bd5-b349-4525-bff1-4268ce339f70] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006120675s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pqbsg" [12a786c1-51d1-4b12-bebe-5dd1978e450c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004180504s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9pnw" [6e488bd5-b349-4525-bff1-4268ce339f70] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005997133s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-231061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pqbsg" [12a786c1-51d1-4b12-bebe-5dd1978e450c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006222006s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-512028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-231061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-512028 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (3.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-231061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-231061 --alsologtostderr -v=1: (1.223130668s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-231061 -n no-preload-231061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-231061 -n no-preload-231061: exit status 2 (378.299593ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-231061 -n no-preload-231061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-231061 -n no-preload-231061: exit status 2 (424.113332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-231061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-231061 --alsologtostderr -v=1: (1.029473225s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-231061 -n no-preload-231061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-231061 -n no-preload-231061
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.96s)

TestStartStop/group/embed-certs/serial/Pause (3.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-512028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-512028 --alsologtostderr -v=1: (1.101319104s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512028 -n embed-certs-512028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512028 -n embed-certs-512028: exit status 2 (388.312631ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-512028 -n embed-certs-512028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-512028 -n embed-certs-512028: exit status 2 (380.63689ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-512028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-512028 --alsologtostderr -v=1: (1.126066101s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512028 -n embed-certs-512028
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-512028 -n embed-certs-512028
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.95s)

TestNetworkPlugins/group/auto/Start (60.34s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.33934306s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.34s)

TestNetworkPlugins/group/kindnet/Start (85.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.435811144s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.44s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-225568 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-225568 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.430585803s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/newest-cni/serial/Stop (11.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-225568 --alsologtostderr -v=3
E1018 09:59:17.229083  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/addons-281483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-225568 --alsologtostderr -v=3: (11.158320193s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.16s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-225568 -n newest-cni-225568
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-225568 -n newest-cni-225568: exit status 7 (79.063285ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-225568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (66.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-225568 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-225568 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m6.022814484s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-225568 -n newest-cni-225568
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (66.35s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6jjwr" [4ac4cb5f-3366-415c-86a6-bdbd657119b0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004651406s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6jjwr" [4ac4cb5f-3366-415c-86a6-bdbd657119b0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005185687s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-354737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-354737 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
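
VerifyKubernetesImages lists the images in the profile as JSON and flags anything outside the expected set, as with the kindnetd and busybox images above. A generic sketch of that audit follows; the exact JSON schema of `image list --format=json` is not asserted here, so the decode and the `repoTags` field name are assumptions, and the registry-prefix check is a simplified stand-in for the test's real allowlist:

// Sketch only: decode `image list --format=json` generically and flag
// images outside an expected registry prefix.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "default-k8s-diff-port-354737",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []map[string]any // schema not asserted; decode generically
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tags, _ := img["repoTags"].([]any) // field name is an assumption
		for _, t := range tags {
			tag, _ := t.(string)
			// Simplified stand-in for the test's allowlist: report anything
			// not from registry.k8s.io, which matches the two images the
			// log calls "non-minikube".
			if tag != "" && !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("non-default image:", tag)
			}
		}
	}
}
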
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-354737 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-354737 --alsologtostderr -v=1: (1.017113345s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737: exit status 2 (288.503813ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737: exit status 2 (344.755669ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-354737 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-354737 -n default-k8s-diff-port-354737
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

TestNetworkPlugins/group/calico/Start (96.26s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.257859473s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.26s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-882442 "pgrep -a kubelet"
I1018 10:00:04.074992  108373 config.go:182] Loaded profile config "auto-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)
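
The KubeletFlags subtests simply run `pgrep -a kubelet` over `minikube ssh` and inspect the kubelet command line that comes back. A minimal standalone equivalent (profile name copied from the log; not the suite's code):

// Sketch only: fetch the kubelet command line the way KubeletFlags does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "ssh", "-p", "auto-882442",
		"pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	// `pgrep -a` prints "<pid> <full command line>", so this output
	// exposes every flag the kubelet was started with.
	fmt.Print(string(out))
}
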
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-882442 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jprfk" [234b019d-34ce-4801-b75b-24f1d2562179] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jprfk" [234b019d-34ce-4801-b75b-24f1d2562179] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006954231s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)
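
Each NetCatPod step deploys testdata/netcat-deployment.yaml and then waits for pods matching `app=netcat` to become healthy, tolerating an initial Pending/ContainersNotReady phase. A crude poll in the same spirit is sketched below; it checks only the pod phase, not full readiness, and the context, selector, and timeout are copied from the log rather than from helpers_test.go:

// Sketch only: poll until pods matching app=netcat report phase Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(15 * time.Minute) // wait window from the log
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "auto-882442",
			"get", "pods", "-n", "default", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		// Phase alone is weaker than the harness's Running+Ready check,
		// and this assumes a single matching pod.
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}
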
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1018 10:00:15.607948  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:00:15.614431  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:00:15.625909  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:00:15.647526  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:00:15.690207  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:00:15.771909  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1018 10:00:15.934016  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
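
The DNS, Localhost, and HairPin subtests probe three paths from inside the netcat deployment: cluster DNS resolution, a localhost connection, and the pod reaching itself through its own Service name (the hairpin case). The exec commands are verbatim from the log; the sketch below only wraps them:

// Sketch only: the three connectivity probes, run via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := [][]string{
		{"nslookup", "kubernetes.default"},                  // DNS
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // Localhost
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // HairPin: the pod dials its own Service
	}
	for _, p := range probes {
		args := append([]string{"--context", "auto-882442", "exec",
			"deployment/netcat", "--"}, p...)
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			fmt.Println("probe failed:", p, err)
		}
	}
}
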
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-225568 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-225568 --alsologtostderr -v=1
E1018 10:00:25.864093  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-225568 -n newest-cni-225568
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-225568 -n newest-cni-225568: exit status 2 (366.802394ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-225568 -n newest-cni-225568
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-225568 -n newest-cni-225568: exit status 2 (352.964392ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-225568 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-225568 -n newest-cni-225568
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-225568 -n newest-cni-225568
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.43s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4l7kj" [7b383c6f-a89d-4125-9e37-3717ba22f782] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00516196s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (79.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.277168613s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.28s)

TestNetworkPlugins/group/enable-default-cni/Start (82.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.859821298s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.86s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-882442 "pgrep -a kubelet"
I1018 10:00:35.988642  108373 config.go:182] Loaded profile config "kindnet-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-882442 replace --force -f testdata/netcat-deployment.yaml
E1018 10:00:36.106330  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vbmq8" [629e47ff-96a8-4503-b323-87f2efa1a7d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vbmq8" [629e47ff-96a8-4503-b323-87f2efa1a7d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004651605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (83.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1018 10:01:08.663416  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.798223355s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.80s)

TestNetworkPlugins/group/calico/ControllerPod (5.12s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mmvpp" [9273016f-7158-403e-8286-4e3c8ff38715] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1018 10:01:18.905672  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.115534539s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-882442 "pgrep -a kubelet"
I1018 10:01:23.938170  108373 config.go:182] Loaded profile config "calico-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (28.83s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-882442 replace --force -f testdata/netcat-deployment.yaml
I1018 10:01:24.717868  108373 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1018 10:01:24.729946  108373 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rcvff" [af28a01f-a7ae-4d9d-aa00-d4fd91b635ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 10:01:37.550209  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/old-k8s-version-066041/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:37.860247  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:37.866843  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:37.878928  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:37.901233  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:37.943337  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:38.024689  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:38.186303  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:38.508480  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:39.150103  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:39.387742  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/no-preload-231061/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:40.431461  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:42.993071  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rcvff" [af28a01f-a7ae-4d9d-aa00-d4fd91b635ba] Running
E1018 10:01:48.115170  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 28.004187652s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (28.83s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-882442 "pgrep -a kubelet"
I1018 10:01:49.510128  108373 config.go:182] Loaded profile config "custom-flannel-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (20.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-882442 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ln9bj" [4a8cdf2b-773a-43b4-9d6e-ef8ef7f163da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ln9bj" [4a8cdf2b-773a-43b4-9d6e-ef8ef7f163da] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 20.005133134s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (20.28s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-882442 "pgrep -a kubelet"
I1018 10:01:56.776655  108373 config.go:182] Loaded profile config "enable-default-cni-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-882442 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qvrpr" [e7070d08-ee79-4a37-8233-9bfb72b7cfb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 10:01:58.357729  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/default-k8s-diff-port-354737/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:59.372123  108373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/functional-361078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qvrpr" [e7070d08-ee79-4a37-8233-9bfb72b7cfb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 17.004186458s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (59.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-882442 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.020338861s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.02s)
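
With bridge done, every .../Start variant in this run has now appeared, and they all share one base command line, differing only in how the CNI is selected. The table below makes that explicit; the flags are copied from the logged commands, and printing them is purely illustrative:

// Sketch only: the per-plugin start commands, reduced to their one delta.
package main

import (
	"fmt"
	"strings"
)

func main() {
	common := "--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m " +
		"--driver=kvm2 --container-runtime=crio --auto-update-drivers=false"
	cniFlag := map[string]string{
		"auto":               "", // no flag: minikube chooses a default CNI
		"kindnet":            "--cni=kindnet",
		"calico":             "--cni=calico",
		"custom-flannel":     "--cni=testdata/kube-flannel.yaml",
		"enable-default-cni": "--enable-default-cni=true",
		"flannel":            "--cni=flannel",
		"bridge":             "--cni=bridge",
	}
	for name, flag := range cniFlag {
		cmd := fmt.Sprintf("minikube start -p %s-882442 %s %s", name, common, flag)
		fmt.Println(strings.TrimSpace(cmd))
	}
}
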
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tds88" [a37fbaa9-b877-445a-b688-6c2d18ab86b8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004021306s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-882442 "pgrep -a kubelet"
I1018 10:02:38.539317  108373 config.go:182] Loaded profile config "flannel-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-882442 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ktk5p" [65bf336c-363b-482d-a20f-50a53434ed4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ktk5p" [65bf336c-363b-482d-a20f-50a53434ed4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004547406s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-882442 "pgrep -a kubelet"
I1018 10:03:10.054967  108373 config.go:182] Loaded profile config "bridge-882442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-882442 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4b4f2" [001df9f9-ebf3-46f6-baec-e082fdd64678] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4b4f2" [001df9f9-ebf3-46f6-baec-e082fdd64678] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.0045927s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-882442 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-882442 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
271 TestStartStop/group/disable-driver-mounts 0.18
275 TestNetworkPlugins/group/kubenet 3.41
283 TestNetworkPlugins/group/cilium 3.73

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-281483 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

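Note: the Docker-only skips above (TestDockerFlags, TestDockerEnvContainerd) all come from the same gate: check which container runtime the suite is exercising, then bail out with t.Skipf before doing any cluster work. A minimal sketch of that pattern, with an illustrative helper name that is not minikube's actual code:

package example

import "testing"

// skipUnlessDockerRuntime is an illustrative helper, not minikube's actual
// implementation: it skips the calling test unless the suite is exercising
// the docker container runtime.
func skipUnlessDockerRuntime(t *testing.T, current string) {
	t.Helper()
	if current != "docker" {
		// Mirrors the log line "only runs with docker container runtime, currently testing crio".
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", current)
	}
}
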
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

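Note: every TunnelCmd skip above traces to the same precondition at functional_test_tunnel_test.go:90: the tunnel tests need to run 'route' via sudo without a password prompt, and on this host that probe fails with exit status 1. A hedged sketch of such a probe (the helper name is illustrative, not the suite's actual check):

package example

import (
	"os/exec"
	"testing"
)

// requirePasswordlessRoute is an illustrative helper: `sudo -n` exits
// non-zero instead of prompting when a password would be required, so a
// failed probe means the tunnel tests cannot manipulate routes and must skip.
func requirePasswordlessRoute(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}
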
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-474552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-474552
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.41s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-882442 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-882442

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-882442

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/hosts:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/resolv.conf:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-882442

>>> host: crictl pods:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: crictl containers:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> k8s: describe netcat deployment:
error: context "kubenet-882442" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-882442" does not exist

>>> k8s: netcat logs:
error: context "kubenet-882442" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-882442" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-882442" does not exist

>>> k8s: coredns logs:
error: context "kubenet-882442" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-882442" does not exist

>>> k8s: api server logs:
error: context "kubenet-882442" does not exist

>>> host: /etc/cni:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: ip a s:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: ip r s:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: iptables-save:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: iptables table nat:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-882442" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-882442" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-882442" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: kubelet daemon config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> k8s: kubelet logs:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.128:8443
  name: cert-expiration-464564
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.140:8443
  name: kubernetes-upgrade-689545
contexts:
- context:
    cluster: cert-expiration-464564
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-464564
  name: cert-expiration-464564
- context:
    cluster: kubernetes-upgrade-689545
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-689545
  name: kubernetes-upgrade-689545
current-context: kubernetes-upgrade-689545
kind: Config
users:
- name: cert-expiration-464564
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.key
- name: kubernetes-upgrade-689545
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.key

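Note: the kubectl config captured above lists only two leftover profiles (cert-expiration-464564 and kubernetes-upgrade-689545) and no kubenet-882442 context, which is exactly why every probe in this debug log fails. As a sketch, the same check could be done programmatically with client-go; the kubeconfig path is supplied by the caller, since the report does not show where the file lives:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig whose path is given as the first argument.
	cfg, err := clientcmd.LoadFromFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name) // kubenet-882442 would be absent here
	}
}
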
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-882442

>>> host: docker daemon status:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: docker daemon config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: docker system info:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: cri-docker daemon status:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: cri-docker daemon config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: cri-dockerd version:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: containerd daemon status:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: containerd daemon config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: containerd config dump:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: crio daemon status:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: crio daemon config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: /etc/crio:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

>>> host: crio config:
* Profile "kubenet-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882442"

----------------------- debugLogs end: kubenet-882442 [took: 3.238347183s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-882442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-882442
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

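Note the order of events in the entry above: the test skips at net_test.go:93 before any cluster is created, yet the debugLogs block still runs (the panic.go:636 frame is the deferred collector firing during t.Skip's goroutine exit), which is why every probe reports a missing kubenet-882442 context. A minimal sketch of that skip-plus-deferred-collection shape, with illustrative helper names rather than the suite's actual code:

package example

import "testing"

// debugLogs is an illustrative stand-in for the suite's deferred collector.
func debugLogs(t *testing.T, profile string) {
	t.Logf("debugLogs for %s would run here, even after a skip", profile)
}

func TestKubenetGroupSketch(t *testing.T) {
	// Deferred calls still fire when t.Skip stops the test via runtime.Goexit,
	// so the collector runs against a profile that was never started.
	defer debugLogs(t, "kubenet-882442")
	t.Skip("Skipping the test as crio container runtimes requires CNI")
}
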
TestNetworkPlugins/group/cilium (3.73s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-882442 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-882442

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-882442

>>> host: /etc/nsswitch.conf:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/hosts:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/resolv.conf:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-882442

>>> host: crictl pods:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: crictl containers:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> k8s: describe netcat deployment:
error: context "cilium-882442" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-882442" does not exist

>>> k8s: netcat logs:
error: context "cilium-882442" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-882442" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-882442" does not exist

>>> k8s: coredns logs:
error: context "cilium-882442" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-882442" does not exist

>>> k8s: api server logs:
error: context "cilium-882442" does not exist

>>> host: /etc/cni:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: ip a s:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: ip r s:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: iptables-save:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: iptables table nat:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-882442

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-882442

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-882442" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-882442" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-882442

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-882442

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-882442" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-882442" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-882442" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-882442" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-882442" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: kubelet daemon config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> k8s: kubelet logs:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.128:8443
  name: cert-expiration-464564
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-104457/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.140:8443
  name: kubernetes-upgrade-689545
contexts:
- context:
    cluster: cert-expiration-464564
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:50:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-464564
  name: cert-expiration-464564
- context:
    cluster: kubernetes-upgrade-689545
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:52:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-689545
  name: kubernetes-upgrade-689545
current-context: kubernetes-upgrade-689545
kind: Config
users:
- name: cert-expiration-464564
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/cert-expiration-464564/client.key
- name: kubernetes-upgrade-689545
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.crt
    client-key: /home/jenkins/minikube-integration/21764-104457/.minikube/profiles/kubernetes-upgrade-689545/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-882442

>>> host: docker daemon status:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: docker daemon config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: docker system info:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: cri-docker daemon status:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: cri-docker daemon config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: cri-dockerd version:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: containerd daemon status:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: containerd daemon config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: containerd config dump:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: crio daemon status:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: crio daemon config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: /etc/crio:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

>>> host: crio config:
* Profile "cilium-882442" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882442"

----------------------- debugLogs end: cilium-882442 [took: 3.580422561s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-882442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-882442
--- SKIP: TestNetworkPlugins/group/cilium (3.73s)

                                                
                                    