Test Report: KVM_Linux_crio 21139

acfd8b7155af18aff79ff1a575a474dfb6fd930f:2025-10-09:41835

Failed tests (5/325)

Order  Failed test                                          Duration (s)
37     TestAddons/parallel/Ingress                          165.34
131    TestFunctional/parallel/ImageCommands/ImageRemove    3.38
244    TestPreload                                          162.87
281    TestPause/serial/SecondStartNoReconfiguration        74.4
283    TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads    2.71
TestAddons/parallel/Ingress (165.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-676842 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-676842 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-676842 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [42803e48-12dc-491c-8f14-f4a8f6b9b681] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [42803e48-12dc-491c-8f14-f4a8f6b9b681] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.005162637s
I1009 18:01:34.144606   15263 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-676842 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.012279072s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-676842 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.66
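
Exit status 28, surfaced through ssh in the stderr block above, is curl's CURLE_OPERATION_TIMEDOUT: the request to the ingress controller inside the VM never completed. A minimal Go sketch that replays the failing probe outside the test harness (assumes the addons-676842 profile is still running and a minikube binary on PATH; the explicit -m 60 timeout is an illustrative addition, not the test's flag):

// replay_probe.go: rerun the ingress probe from addons_test.go:264.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test issues: curl the ingress from inside the VM with
	// the Host header that the nginx Ingress rule matches on.
	cmd := exec.Command("minikube", "-p", "addons-676842", "ssh",
		"curl -s -m 60 http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	// curl exits 28 (CURLE_OPERATION_TIMEDOUT) when nothing answers on :80.
	fmt.Printf("%s\nerr: %v\n", out, err)
}
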
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-676842 -n addons-676842
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 logs -n 25: (1.470522888s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-712312 │ download-only-712312 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ 09 Oct 25 17:57 UTC │
	│ start │ --download-only -p binary-mirror-168507 --alsologtostderr --binary-mirror http://127.0.0.1:35065 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-168507 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ │
	│ delete │ -p binary-mirror-168507 │ binary-mirror-168507 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ 09 Oct 25 17:57 UTC │
	│ addons │ disable dashboard -p addons-676842 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ │
	│ addons │ enable dashboard -p addons-676842 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ │
	│ start │ -p addons-676842 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 17:57 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ addons-676842 addons disable volcano --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:00 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ addons-676842 addons disable gcp-auth --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:00 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ enable headlamp -p addons-676842 --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:00 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ addons-676842 addons disable metrics-server --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:00 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ addons-676842 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:00 UTC │ 09 Oct 25 18:00 UTC │
	│ addons │ addons-676842 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable headlamp --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ ip │ addons-676842 ip │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable registry --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ ssh │ addons-676842 ssh cat /opt/local-path-provisioner/pvc-0a963da0-6088-440e-83a8-98817e7b62a4_default_test-pvc/file1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-676842 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable registry-creds --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ ssh │ addons-676842 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ │
	│ addons │ addons-676842 addons disable yakd --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:01 UTC │
	│ addons │ addons-676842 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:01 UTC │ 09 Oct 25 18:02 UTC │
	│ ip │ addons-676842 ip │ addons-676842 │ jenkins │ v1.37.0 │ 09 Oct 25 18:03 UTC │ 09 Oct 25 18:03 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:57:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:57:06.895133   15980 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:57:06.895226   15980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:57:06.895233   15980 out.go:374] Setting ErrFile to fd 2...
	I1009 17:57:06.895238   15980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:57:06.895493   15980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 17:57:06.896141   15980 out.go:368] Setting JSON to false
	I1009 17:57:06.896974   15980 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2367,"bootTime":1760030260,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:57:06.897071   15980 start.go:141] virtualization: kvm guest
	I1009 17:57:06.898835   15980 out.go:179] * [addons-676842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:57:06.900096   15980 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 17:57:06.900099   15980 notify.go:220] Checking for updates...
	I1009 17:57:06.903032   15980 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:57:06.904150   15980 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 17:57:06.905316   15980 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 17:57:06.906446   15980 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 17:57:06.907588   15980 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 17:57:06.909009   15980 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:57:06.940699   15980 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 17:57:06.942174   15980 start.go:305] selected driver: kvm2
	I1009 17:57:06.942190   15980 start.go:925] validating driver "kvm2" against <nil>
	I1009 17:57:06.942201   15980 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 17:57:06.942884   15980 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:57:06.942967   15980 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:57:06.956863   15980 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:57:06.956892   15980 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:57:06.970455   15980 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:57:06.970497   15980 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:57:06.970744   15980 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 17:57:06.970768   15980 cni.go:84] Creating CNI manager for ""
	I1009 17:57:06.970809   15980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 17:57:06.970819   15980 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 17:57:06.970858   15980 start.go:349] cluster config:
	{Name:addons-676842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-676842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:57:06.970957   15980 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:57:06.972932   15980 out.go:179] * Starting "addons-676842" primary control-plane node in "addons-676842" cluster
	I1009 17:57:06.974229   15980 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:57:06.974260   15980 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 17:57:06.974269   15980 cache.go:64] Caching tarball of preloaded images
	I1009 17:57:06.974358   15980 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 17:57:06.974368   15980 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
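
The preload short-circuit above amounts to a filesystem check against the profile cache. A rough illustrative equivalent (hypothetical stand-in, not minikube's preload.go; paths taken from the log):

// preload_check.go: reuse the preload tarball if it is already cached.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := "/home/jenkins/minikube-integration/21139-11352/.minikube"
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download:", tarball)
	} else {
		fmt.Println("no local preload, would download:", tarball)
	}
}
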
	I1009 17:57:06.974667   15980 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/config.json ...
	I1009 17:57:06.974689   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/config.json: {Name:mk27b0e4f4de9900f3960afa1236f064b3956883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:06.974836   15980 start.go:360] acquireMachinesLock for addons-676842: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 17:57:06.974883   15980 start.go:364] duration metric: took 34.58µs to acquireMachinesLock for "addons-676842"
	I1009 17:57:06.974901   15980 start.go:93] Provisioning new machine with config: &{Name:addons-676842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-676842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 17:57:06.974952   15980 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 17:57:06.976745   15980 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1009 17:57:06.976861   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:06.976903   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:06.989926   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I1009 17:57:06.990411   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:06.990874   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:06.990892   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:06.991236   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:06.991443   15980 main.go:141] libmachine: (addons-676842) Calling .GetMachineName
	I1009 17:57:06.991614   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:06.991751   15980 start.go:159] libmachine.API.Create for "addons-676842" (driver="kvm2")
	I1009 17:57:06.991784   15980 client.go:168] LocalClient.Create starting
	I1009 17:57:06.991833   15980 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem
	I1009 17:57:07.417022   15980 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem
	I1009 17:57:07.640641   15980 main.go:141] libmachine: Running pre-create checks...
	I1009 17:57:07.640666   15980 main.go:141] libmachine: (addons-676842) Calling .PreCreateCheck
	I1009 17:57:07.641192   15980 main.go:141] libmachine: (addons-676842) Calling .GetConfigRaw
	I1009 17:57:07.641631   15980 main.go:141] libmachine: Creating machine...
	I1009 17:57:07.641647   15980 main.go:141] libmachine: (addons-676842) Calling .Create
	I1009 17:57:07.641788   15980 main.go:141] libmachine: (addons-676842) creating domain...
	I1009 17:57:07.641800   15980 main.go:141] libmachine: (addons-676842) creating network...
	I1009 17:57:07.643734   15980 main.go:141] libmachine: (addons-676842) DBG | found existing default network
	I1009 17:57:07.643916   15980 main.go:141] libmachine: (addons-676842) DBG | <network>
	I1009 17:57:07.643938   15980 main.go:141] libmachine: (addons-676842) DBG |   <name>default</name>
	I1009 17:57:07.643951   15980 main.go:141] libmachine: (addons-676842) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 17:57:07.643961   15980 main.go:141] libmachine: (addons-676842) DBG |   <forward mode='nat'>
	I1009 17:57:07.643971   15980 main.go:141] libmachine: (addons-676842) DBG |     <nat>
	I1009 17:57:07.643980   15980 main.go:141] libmachine: (addons-676842) DBG |       <port start='1024' end='65535'/>
	I1009 17:57:07.643991   15980 main.go:141] libmachine: (addons-676842) DBG |     </nat>
	I1009 17:57:07.643999   15980 main.go:141] libmachine: (addons-676842) DBG |   </forward>
	I1009 17:57:07.644010   15980 main.go:141] libmachine: (addons-676842) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 17:57:07.644019   15980 main.go:141] libmachine: (addons-676842) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 17:57:07.644030   15980 main.go:141] libmachine: (addons-676842) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 17:57:07.644052   15980 main.go:141] libmachine: (addons-676842) DBG |     <dhcp>
	I1009 17:57:07.644066   15980 main.go:141] libmachine: (addons-676842) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 17:57:07.644093   15980 main.go:141] libmachine: (addons-676842) DBG |     </dhcp>
	I1009 17:57:07.644104   15980 main.go:141] libmachine: (addons-676842) DBG |   </ip>
	I1009 17:57:07.644116   15980 main.go:141] libmachine: (addons-676842) DBG | </network>
	I1009 17:57:07.644129   15980 main.go:141] libmachine: (addons-676842) DBG | 
	I1009 17:57:07.644631   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:07.644477   16008 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1009 17:57:07.644684   15980 main.go:141] libmachine: (addons-676842) DBG | defining private network:
	I1009 17:57:07.644710   15980 main.go:141] libmachine: (addons-676842) DBG | 
	I1009 17:57:07.644721   15980 main.go:141] libmachine: (addons-676842) DBG | <network>
	I1009 17:57:07.644737   15980 main.go:141] libmachine: (addons-676842) DBG |   <name>mk-addons-676842</name>
	I1009 17:57:07.644746   15980 main.go:141] libmachine: (addons-676842) DBG |   <dns enable='no'/>
	I1009 17:57:07.644760   15980 main.go:141] libmachine: (addons-676842) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 17:57:07.644783   15980 main.go:141] libmachine: (addons-676842) DBG |     <dhcp>
	I1009 17:57:07.644794   15980 main.go:141] libmachine: (addons-676842) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 17:57:07.644802   15980 main.go:141] libmachine: (addons-676842) DBG |     </dhcp>
	I1009 17:57:07.644808   15980 main.go:141] libmachine: (addons-676842) DBG |   </ip>
	I1009 17:57:07.644817   15980 main.go:141] libmachine: (addons-676842) DBG | </network>
	I1009 17:57:07.644827   15980 main.go:141] libmachine: (addons-676842) DBG | 
	I1009 17:57:07.650641   15980 main.go:141] libmachine: (addons-676842) DBG | creating private network mk-addons-676842 192.168.39.0/24...
	I1009 17:57:07.718400   15980 main.go:141] libmachine: (addons-676842) DBG | private network mk-addons-676842 192.168.39.0/24 created
	I1009 17:57:07.718673   15980 main.go:141] libmachine: (addons-676842) DBG | <network>
	I1009 17:57:07.718696   15980 main.go:141] libmachine: (addons-676842) setting up store path in /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842 ...
	I1009 17:57:07.718704   15980 main.go:141] libmachine: (addons-676842) DBG |   <name>mk-addons-676842</name>
	I1009 17:57:07.718712   15980 main.go:141] libmachine: (addons-676842) DBG |   <uuid>3002a8b8-5dec-4b37-bde3-29c8885ef3af</uuid>
	I1009 17:57:07.718717   15980 main.go:141] libmachine: (addons-676842) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1009 17:57:07.718723   15980 main.go:141] libmachine: (addons-676842) DBG |   <mac address='52:54:00:b0:0f:61'/>
	I1009 17:57:07.718728   15980 main.go:141] libmachine: (addons-676842) DBG |   <dns enable='no'/>
	I1009 17:57:07.718738   15980 main.go:141] libmachine: (addons-676842) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 17:57:07.718747   15980 main.go:141] libmachine: (addons-676842) DBG |     <dhcp>
	I1009 17:57:07.718757   15980 main.go:141] libmachine: (addons-676842) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 17:57:07.718767   15980 main.go:141] libmachine: (addons-676842) DBG |     </dhcp>
	I1009 17:57:07.718775   15980 main.go:141] libmachine: (addons-676842) DBG |   </ip>
	I1009 17:57:07.718783   15980 main.go:141] libmachine: (addons-676842) DBG | </network>
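
The block above is a plain libvirt <network> definition, and what libmachine logs here is the usual define-then-start sequence. A hedged sketch of that flow using the upstream Go bindings (module path libvirt.org/go/libvirt; assumes libvirt is installed, qemu:///system is reachable, and the XML has been saved to a file):

// define_network.go: persist and start a libvirt network from XML.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("mk-addons-676842.xml") // the <network> XML above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Persist the network definition, then bring the bridge up.
	net, err := conn.NetworkDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		panic(err)
	}
	fmt.Println("private network defined and started")
}
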
	I1009 17:57:07.718812   15980 main.go:141] libmachine: (addons-676842) building disk image from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 17:57:07.718835   15980 main.go:141] libmachine: (addons-676842) Downloading /home/jenkins/minikube-integration/21139-11352/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 17:57:07.718844   15980 main.go:141] libmachine: (addons-676842) DBG | 
	I1009 17:57:07.718865   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:07.718666   16008 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 17:57:08.004823   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:08.004626   16008 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa...
	I1009 17:57:08.124493   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:08.124323   16008 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/addons-676842.rawdisk...
	I1009 17:57:08.124519   15980 main.go:141] libmachine: (addons-676842) DBG | Writing magic tar header
	I1009 17:57:08.124533   15980 main.go:141] libmachine: (addons-676842) DBG | Writing SSH key tar header
	I1009 17:57:08.124541   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:08.124437   16008 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842 ...
	I1009 17:57:08.124575   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842
	I1009 17:57:08.124582   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines
	I1009 17:57:08.124589   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 17:57:08.124602   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352
	I1009 17:57:08.124613   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842 (perms=drwx------)
	I1009 17:57:08.124630   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines (perms=drwxr-xr-x)
	I1009 17:57:08.124641   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube (perms=drwxr-xr-x)
	I1009 17:57:08.124651   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins/minikube-integration/21139-11352 (perms=drwxrwxr-x)
	I1009 17:57:08.124664   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 17:57:08.124673   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 17:57:08.124682   15980 main.go:141] libmachine: (addons-676842) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 17:57:08.124690   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home/jenkins
	I1009 17:57:08.124700   15980 main.go:141] libmachine: (addons-676842) defining domain...
	I1009 17:57:08.124721   15980 main.go:141] libmachine: (addons-676842) DBG | checking permissions on dir: /home
	I1009 17:57:08.124746   15980 main.go:141] libmachine: (addons-676842) DBG | skipping /home - not owner
	I1009 17:57:08.125951   15980 main.go:141] libmachine: (addons-676842) defining domain using XML: 
	I1009 17:57:08.125974   15980 main.go:141] libmachine: (addons-676842) <domain type='kvm'>
	I1009 17:57:08.125984   15980 main.go:141] libmachine: (addons-676842)   <name>addons-676842</name>
	I1009 17:57:08.125996   15980 main.go:141] libmachine: (addons-676842)   <memory unit='MiB'>4096</memory>
	I1009 17:57:08.126005   15980 main.go:141] libmachine: (addons-676842)   <vcpu>2</vcpu>
	I1009 17:57:08.126011   15980 main.go:141] libmachine: (addons-676842)   <features>
	I1009 17:57:08.126019   15980 main.go:141] libmachine: (addons-676842)     <acpi/>
	I1009 17:57:08.126027   15980 main.go:141] libmachine: (addons-676842)     <apic/>
	I1009 17:57:08.126054   15980 main.go:141] libmachine: (addons-676842)     <pae/>
	I1009 17:57:08.126068   15980 main.go:141] libmachine: (addons-676842)   </features>
	I1009 17:57:08.126087   15980 main.go:141] libmachine: (addons-676842)   <cpu mode='host-passthrough'>
	I1009 17:57:08.126111   15980 main.go:141] libmachine: (addons-676842)   </cpu>
	I1009 17:57:08.126117   15980 main.go:141] libmachine: (addons-676842)   <os>
	I1009 17:57:08.126129   15980 main.go:141] libmachine: (addons-676842)     <type>hvm</type>
	I1009 17:57:08.126137   15980 main.go:141] libmachine: (addons-676842)     <boot dev='cdrom'/>
	I1009 17:57:08.126141   15980 main.go:141] libmachine: (addons-676842)     <boot dev='hd'/>
	I1009 17:57:08.126146   15980 main.go:141] libmachine: (addons-676842)     <bootmenu enable='no'/>
	I1009 17:57:08.126151   15980 main.go:141] libmachine: (addons-676842)   </os>
	I1009 17:57:08.126156   15980 main.go:141] libmachine: (addons-676842)   <devices>
	I1009 17:57:08.126169   15980 main.go:141] libmachine: (addons-676842)     <disk type='file' device='cdrom'>
	I1009 17:57:08.126199   15980 main.go:141] libmachine: (addons-676842)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/boot2docker.iso'/>
	I1009 17:57:08.126218   15980 main.go:141] libmachine: (addons-676842)       <target dev='hdc' bus='scsi'/>
	I1009 17:57:08.126225   15980 main.go:141] libmachine: (addons-676842)       <readonly/>
	I1009 17:57:08.126232   15980 main.go:141] libmachine: (addons-676842)     </disk>
	I1009 17:57:08.126239   15980 main.go:141] libmachine: (addons-676842)     <disk type='file' device='disk'>
	I1009 17:57:08.126246   15980 main.go:141] libmachine: (addons-676842)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 17:57:08.126254   15980 main.go:141] libmachine: (addons-676842)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/addons-676842.rawdisk'/>
	I1009 17:57:08.126261   15980 main.go:141] libmachine: (addons-676842)       <target dev='hda' bus='virtio'/>
	I1009 17:57:08.126265   15980 main.go:141] libmachine: (addons-676842)     </disk>
	I1009 17:57:08.126272   15980 main.go:141] libmachine: (addons-676842)     <interface type='network'>
	I1009 17:57:08.126289   15980 main.go:141] libmachine: (addons-676842)       <source network='mk-addons-676842'/>
	I1009 17:57:08.126299   15980 main.go:141] libmachine: (addons-676842)       <model type='virtio'/>
	I1009 17:57:08.126313   15980 main.go:141] libmachine: (addons-676842)     </interface>
	I1009 17:57:08.126330   15980 main.go:141] libmachine: (addons-676842)     <interface type='network'>
	I1009 17:57:08.126343   15980 main.go:141] libmachine: (addons-676842)       <source network='default'/>
	I1009 17:57:08.126353   15980 main.go:141] libmachine: (addons-676842)       <model type='virtio'/>
	I1009 17:57:08.126364   15980 main.go:141] libmachine: (addons-676842)     </interface>
	I1009 17:57:08.126374   15980 main.go:141] libmachine: (addons-676842)     <serial type='pty'>
	I1009 17:57:08.126384   15980 main.go:141] libmachine: (addons-676842)       <target port='0'/>
	I1009 17:57:08.126393   15980 main.go:141] libmachine: (addons-676842)     </serial>
	I1009 17:57:08.126412   15980 main.go:141] libmachine: (addons-676842)     <console type='pty'>
	I1009 17:57:08.126428   15980 main.go:141] libmachine: (addons-676842)       <target type='serial' port='0'/>
	I1009 17:57:08.126438   15980 main.go:141] libmachine: (addons-676842)     </console>
	I1009 17:57:08.126444   15980 main.go:141] libmachine: (addons-676842)     <rng model='virtio'>
	I1009 17:57:08.126450   15980 main.go:141] libmachine: (addons-676842)       <backend model='random'>/dev/random</backend>
	I1009 17:57:08.126456   15980 main.go:141] libmachine: (addons-676842)     </rng>
	I1009 17:57:08.126461   15980 main.go:141] libmachine: (addons-676842)   </devices>
	I1009 17:57:08.126467   15980 main.go:141] libmachine: (addons-676842) </domain>
	I1009 17:57:08.126474   15980 main.go:141] libmachine: (addons-676842) 
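
Likewise, "defining domain..." above and the later "starting domain..." map onto libvirt's define/create pair. A minimal sketch under the same assumptions as the network example (not minikube's actual driver code):

// define_domain.go: persist and boot a libvirt domain from XML.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("addons-676842.xml") // the <domain> XML above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "defining domain..."
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // "starting domain..."
		panic(err)
	}
	fmt.Println("domain defined and running")
}
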
	I1009 17:57:08.133268   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:09:36:e8 in network default
	I1009 17:57:08.133810   15980 main.go:141] libmachine: (addons-676842) starting domain...
	I1009 17:57:08.133836   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:08.133855   15980 main.go:141] libmachine: (addons-676842) ensuring networks are active...
	I1009 17:57:08.134574   15980 main.go:141] libmachine: (addons-676842) Ensuring network default is active
	I1009 17:57:08.134898   15980 main.go:141] libmachine: (addons-676842) Ensuring network mk-addons-676842 is active
	I1009 17:57:08.135503   15980 main.go:141] libmachine: (addons-676842) getting domain XML...
	I1009 17:57:08.136597   15980 main.go:141] libmachine: (addons-676842) DBG | starting domain XML:
	I1009 17:57:08.136618   15980 main.go:141] libmachine: (addons-676842) DBG | <domain type='kvm'>
	I1009 17:57:08.136629   15980 main.go:141] libmachine: (addons-676842) DBG |   <name>addons-676842</name>
	I1009 17:57:08.136638   15980 main.go:141] libmachine: (addons-676842) DBG |   <uuid>022d6156-cc4a-4ea3-bc86-10dd75a25dbb</uuid>
	I1009 17:57:08.136653   15980 main.go:141] libmachine: (addons-676842) DBG |   <memory unit='KiB'>4194304</memory>
	I1009 17:57:08.136662   15980 main.go:141] libmachine: (addons-676842) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1009 17:57:08.136671   15980 main.go:141] libmachine: (addons-676842) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 17:57:08.136680   15980 main.go:141] libmachine: (addons-676842) DBG |   <os>
	I1009 17:57:08.136705   15980 main.go:141] libmachine: (addons-676842) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 17:57:08.136721   15980 main.go:141] libmachine: (addons-676842) DBG |     <boot dev='cdrom'/>
	I1009 17:57:08.136727   15980 main.go:141] libmachine: (addons-676842) DBG |     <boot dev='hd'/>
	I1009 17:57:08.136735   15980 main.go:141] libmachine: (addons-676842) DBG |     <bootmenu enable='no'/>
	I1009 17:57:08.136743   15980 main.go:141] libmachine: (addons-676842) DBG |   </os>
	I1009 17:57:08.136751   15980 main.go:141] libmachine: (addons-676842) DBG |   <features>
	I1009 17:57:08.136756   15980 main.go:141] libmachine: (addons-676842) DBG |     <acpi/>
	I1009 17:57:08.136765   15980 main.go:141] libmachine: (addons-676842) DBG |     <apic/>
	I1009 17:57:08.136770   15980 main.go:141] libmachine: (addons-676842) DBG |     <pae/>
	I1009 17:57:08.136778   15980 main.go:141] libmachine: (addons-676842) DBG |   </features>
	I1009 17:57:08.136787   15980 main.go:141] libmachine: (addons-676842) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 17:57:08.136806   15980 main.go:141] libmachine: (addons-676842) DBG |   <clock offset='utc'/>
	I1009 17:57:08.136814   15980 main.go:141] libmachine: (addons-676842) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 17:57:08.136822   15980 main.go:141] libmachine: (addons-676842) DBG |   <on_reboot>restart</on_reboot>
	I1009 17:57:08.136852   15980 main.go:141] libmachine: (addons-676842) DBG |   <on_crash>destroy</on_crash>
	I1009 17:57:08.136879   15980 main.go:141] libmachine: (addons-676842) DBG |   <devices>
	I1009 17:57:08.136894   15980 main.go:141] libmachine: (addons-676842) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 17:57:08.136901   15980 main.go:141] libmachine: (addons-676842) DBG |     <disk type='file' device='cdrom'>
	I1009 17:57:08.136912   15980 main.go:141] libmachine: (addons-676842) DBG |       <driver name='qemu' type='raw'/>
	I1009 17:57:08.136919   15980 main.go:141] libmachine: (addons-676842) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/boot2docker.iso'/>
	I1009 17:57:08.136927   15980 main.go:141] libmachine: (addons-676842) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 17:57:08.136934   15980 main.go:141] libmachine: (addons-676842) DBG |       <readonly/>
	I1009 17:57:08.136948   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 17:57:08.136962   15980 main.go:141] libmachine: (addons-676842) DBG |     </disk>
	I1009 17:57:08.136981   15980 main.go:141] libmachine: (addons-676842) DBG |     <disk type='file' device='disk'>
	I1009 17:57:08.136996   15980 main.go:141] libmachine: (addons-676842) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 17:57:08.137024   15980 main.go:141] libmachine: (addons-676842) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/addons-676842.rawdisk'/>
	I1009 17:57:08.137050   15980 main.go:141] libmachine: (addons-676842) DBG |       <target dev='hda' bus='virtio'/>
	I1009 17:57:08.137067   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 17:57:08.137078   15980 main.go:141] libmachine: (addons-676842) DBG |     </disk>
	I1009 17:57:08.137089   15980 main.go:141] libmachine: (addons-676842) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 17:57:08.137102   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 17:57:08.137118   15980 main.go:141] libmachine: (addons-676842) DBG |     </controller>
	I1009 17:57:08.137132   15980 main.go:141] libmachine: (addons-676842) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 17:57:08.137145   15980 main.go:141] libmachine: (addons-676842) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 17:57:08.137159   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 17:57:08.137168   15980 main.go:141] libmachine: (addons-676842) DBG |     </controller>
	I1009 17:57:08.137192   15980 main.go:141] libmachine: (addons-676842) DBG |     <interface type='network'>
	I1009 17:57:08.137203   15980 main.go:141] libmachine: (addons-676842) DBG |       <mac address='52:54:00:7c:95:ff'/>
	I1009 17:57:08.137218   15980 main.go:141] libmachine: (addons-676842) DBG |       <source network='mk-addons-676842'/>
	I1009 17:57:08.137252   15980 main.go:141] libmachine: (addons-676842) DBG |       <model type='virtio'/>
	I1009 17:57:08.137264   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 17:57:08.137274   15980 main.go:141] libmachine: (addons-676842) DBG |     </interface>
	I1009 17:57:08.137284   15980 main.go:141] libmachine: (addons-676842) DBG |     <interface type='network'>
	I1009 17:57:08.137295   15980 main.go:141] libmachine: (addons-676842) DBG |       <mac address='52:54:00:09:36:e8'/>
	I1009 17:57:08.137306   15980 main.go:141] libmachine: (addons-676842) DBG |       <source network='default'/>
	I1009 17:57:08.137315   15980 main.go:141] libmachine: (addons-676842) DBG |       <model type='virtio'/>
	I1009 17:57:08.137330   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 17:57:08.137346   15980 main.go:141] libmachine: (addons-676842) DBG |     </interface>
	I1009 17:57:08.137363   15980 main.go:141] libmachine: (addons-676842) DBG |     <serial type='pty'>
	I1009 17:57:08.137380   15980 main.go:141] libmachine: (addons-676842) DBG |       <target type='isa-serial' port='0'>
	I1009 17:57:08.137391   15980 main.go:141] libmachine: (addons-676842) DBG |         <model name='isa-serial'/>
	I1009 17:57:08.137401   15980 main.go:141] libmachine: (addons-676842) DBG |       </target>
	I1009 17:57:08.137411   15980 main.go:141] libmachine: (addons-676842) DBG |     </serial>
	I1009 17:57:08.137422   15980 main.go:141] libmachine: (addons-676842) DBG |     <console type='pty'>
	I1009 17:57:08.137437   15980 main.go:141] libmachine: (addons-676842) DBG |       <target type='serial' port='0'/>
	I1009 17:57:08.137449   15980 main.go:141] libmachine: (addons-676842) DBG |     </console>
	I1009 17:57:08.137460   15980 main.go:141] libmachine: (addons-676842) DBG |     <input type='mouse' bus='ps2'/>
	I1009 17:57:08.137470   15980 main.go:141] libmachine: (addons-676842) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 17:57:08.137481   15980 main.go:141] libmachine: (addons-676842) DBG |     <audio id='1' type='none'/>
	I1009 17:57:08.137495   15980 main.go:141] libmachine: (addons-676842) DBG |     <memballoon model='virtio'>
	I1009 17:57:08.137512   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 17:57:08.137524   15980 main.go:141] libmachine: (addons-676842) DBG |     </memballoon>
	I1009 17:57:08.137535   15980 main.go:141] libmachine: (addons-676842) DBG |     <rng model='virtio'>
	I1009 17:57:08.137569   15980 main.go:141] libmachine: (addons-676842) DBG |       <backend model='random'>/dev/random</backend>
	I1009 17:57:08.137587   15980 main.go:141] libmachine: (addons-676842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 17:57:08.137597   15980 main.go:141] libmachine: (addons-676842) DBG |     </rng>
	I1009 17:57:08.137607   15980 main.go:141] libmachine: (addons-676842) DBG |   </devices>
	I1009 17:57:08.137616   15980 main.go:141] libmachine: (addons-676842) DBG | </domain>
	I1009 17:57:08.137626   15980 main.go:141] libmachine: (addons-676842) DBG | 
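The XML above is the complete libvirt domain definition the kvm2 driver submits; the `<interface>` on network mk-addons-676842 carries the MAC (52:54:00:7c:95:ff) that the lease-polling below keys on. A minimal stdlib sketch of pulling that MAC out of such XML (the struct shape is illustrative, not minikube's actual types):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Minimal projection of the domain XML above; only the fields needed
// to map a libvirt network name to the MAC attached to it.
type domainXML struct {
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

// macForNetwork returns the MAC of the interface attached to the
// named network, or an error if none matches.
func macForNetwork(doc []byte, network string) (string, error) {
	var d domainXML
	if err := xml.Unmarshal(doc, &d); err != nil {
		return "", err
	}
	for _, iface := range d.Interfaces {
		if iface.Source.Network == network {
			return iface.MAC.Address, nil
		}
	}
	return "", fmt.Errorf("no interface on network %q", network)
}

func main() {
	doc := []byte(`<domain><devices>
	  <interface type='network'>
	    <mac address='52:54:00:7c:95:ff'/>
	    <source network='mk-addons-676842'/>
	  </interface>
	</devices></domain>`)
	fmt.Println(macForNetwork(doc, "mk-addons-676842"))
}
```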
	I1009 17:57:09.451426   15980 main.go:141] libmachine: (addons-676842) waiting for domain to start...
	I1009 17:57:09.453007   15980 main.go:141] libmachine: (addons-676842) domain is now running
	I1009 17:57:09.453032   15980 main.go:141] libmachine: (addons-676842) waiting for IP...
	I1009 17:57:09.453868   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:09.454384   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:09.454409   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:09.454734   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:09.454814   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:09.454750   16008 retry.go:31] will retry after 207.956877ms: waiting for domain to come up
	I1009 17:57:09.664549   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:09.665051   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:09.665080   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:09.665342   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:09.665365   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:09.665316   16008 retry.go:31] will retry after 319.233412ms: waiting for domain to come up
	I1009 17:57:09.985878   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:09.986550   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:09.986576   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:09.986881   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:09.986907   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:09.986865   16008 retry.go:31] will retry after 473.746187ms: waiting for domain to come up
	I1009 17:57:10.462501   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:10.462855   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:10.462904   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:10.463128   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:10.463169   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:10.463112   16008 retry.go:31] will retry after 600.088204ms: waiting for domain to come up
	I1009 17:57:11.065191   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:11.065725   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:11.065746   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:11.066087   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:11.066180   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:11.066087   16008 retry.go:31] will retry after 658.412891ms: waiting for domain to come up
	I1009 17:57:11.726306   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:11.726928   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:11.726944   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:11.727242   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:11.727276   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:11.727219   16008 retry.go:31] will retry after 633.707062ms: waiting for domain to come up
	I1009 17:57:12.362969   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:12.363443   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:12.363465   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:12.363753   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:12.363799   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:12.363745   16008 retry.go:31] will retry after 892.212486ms: waiting for domain to come up
	I1009 17:57:13.258102   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:13.258670   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:13.258692   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:13.259019   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:13.259062   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:13.259000   16008 retry.go:31] will retry after 1.39941207s: waiting for domain to come up
	I1009 17:57:14.659824   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:14.660335   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:14.660362   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:14.660701   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:14.660729   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:14.660664   16008 retry.go:31] will retry after 1.20376448s: waiting for domain to come up
	I1009 17:57:15.866262   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:15.866686   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:15.866712   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:15.866998   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:15.867051   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:15.866982   16008 retry.go:31] will retry after 2.057453732s: waiting for domain to come up
	I1009 17:57:17.926178   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:17.926610   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:17.926642   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:17.926890   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:17.926921   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:17.926869   16008 retry.go:31] will retry after 1.768680602s: waiting for domain to come up
	I1009 17:57:19.698012   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:19.698594   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:19.698615   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:19.698951   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:19.698982   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:19.698913   16008 retry.go:31] will retry after 3.116158805s: waiting for domain to come up
	I1009 17:57:22.816854   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:22.817309   15980 main.go:141] libmachine: (addons-676842) DBG | no network interface addresses found for domain addons-676842 (source=lease)
	I1009 17:57:22.817335   15980 main.go:141] libmachine: (addons-676842) DBG | trying to list again with source=arp
	I1009 17:57:22.817570   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find current IP address of domain addons-676842 in network mk-addons-676842 (interfaces detected: [])
	I1009 17:57:22.817591   15980 main.go:141] libmachine: (addons-676842) DBG | I1009 17:57:22.817543   16008 retry.go:31] will retry after 3.379091733s: waiting for domain to come up
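Each failed lookup above ends with a randomized, growing delay before the next attempt. A rough sketch of that poll-with-backoff shape, assuming a generic lookup callback (the jitter and growth factors are guesses; minikube's retry helper differs in detail):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// waitForIP polls lookup until it returns an address or the deadline
// passes, sleeping a jittered, growing interval between attempts,
// the same shape as the "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add up to 50% jitter, then grow the base delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out: %w", errNoIP)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoIP
		}
		return "192.168.39.66", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```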
	I1009 17:57:26.201387   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.202077   15980 main.go:141] libmachine: (addons-676842) found domain IP: 192.168.39.66
	I1009 17:57:26.202098   15980 main.go:141] libmachine: (addons-676842) reserving static IP address...
	I1009 17:57:26.202110   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has current primary IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.202632   15980 main.go:141] libmachine: (addons-676842) DBG | unable to find host DHCP lease matching {name: "addons-676842", mac: "52:54:00:7c:95:ff", ip: "192.168.39.66"} in network mk-addons-676842
	I1009 17:57:26.400709   15980 main.go:141] libmachine: (addons-676842) DBG | Getting to WaitForSSH function...
	I1009 17:57:26.400742   15980 main.go:141] libmachine: (addons-676842) reserved static IP address 192.168.39.66 for domain addons-676842
	I1009 17:57:26.400753   15980 main.go:141] libmachine: (addons-676842) waiting for SSH...
	I1009 17:57:26.403528   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.403952   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:26.403984   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.404153   15980 main.go:141] libmachine: (addons-676842) DBG | Using SSH client type: external
	I1009 17:57:26.404180   15980 main.go:141] libmachine: (addons-676842) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa (-rw-------)
	I1009 17:57:26.404228   15980 main.go:141] libmachine: (addons-676842) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 17:57:26.404243   15980 main.go:141] libmachine: (addons-676842) DBG | About to run SSH command:
	I1009 17:57:26.404262   15980 main.go:141] libmachine: (addons-676842) DBG | exit 0
	I1009 17:57:26.545875   15980 main.go:141] libmachine: (addons-676842) DBG | SSH cmd err, output: <nil>: 
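WaitForSSH shells out to the system ssh binary and runs `exit 0` until it succeeds, which proves sshd is up and the key is accepted. A bare-bones equivalent with os/exec, reusing the options visible in the log (the key path below is a placeholder):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "ssh ... exit 0" succeeds against the
// guest, using the same non-interactive options as the log above.
func sshReady(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Placeholder key path; the log uses the profile's id_rsa.
	for i := 0; i < 30; i++ {
		if err := sshReady("192.168.39.66", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```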
	I1009 17:57:26.546178   15980 main.go:141] libmachine: (addons-676842) domain creation complete
	I1009 17:57:26.546518   15980 main.go:141] libmachine: (addons-676842) Calling .GetConfigRaw
	I1009 17:57:26.547189   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:26.547449   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:26.547636   15980 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 17:57:26.547651   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:26.548962   15980 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 17:57:26.548979   15980 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 17:57:26.548987   15980 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 17:57:26.548994   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:26.551432   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.551825   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:26.551852   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.551997   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:26.552186   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.552310   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.552477   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:26.552639   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:26.552873   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:26.552889   15980 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 17:57:26.666128   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 17:57:26.666158   15980 main.go:141] libmachine: Detecting the provisioner...
	I1009 17:57:26.666168   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:26.669375   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.669790   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:26.669812   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.670086   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:26.670275   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.670447   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.670551   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:26.670727   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:26.671000   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:26.671015   15980 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 17:57:26.784878   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 17:57:26.784952   15980 main.go:141] libmachine: found compatible host: buildroot
	I1009 17:57:26.784964   15980 main.go:141] libmachine: Provisioning with buildroot...
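Provisioner detection comes down to parsing the key=value pairs of the /etc/os-release output above and matching on ID. A stdlib-only sketch (os-release quoting rules are richer than this simple trim):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release output like the log above into
// a map, stripping optional surrounding quotes from values.
func parseOSRelease(raw string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n")
	fmt.Println(osr["ID"], osr["VERSION_ID"]) // buildroot 2025.02
}
```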
	I1009 17:57:26.784973   15980 main.go:141] libmachine: (addons-676842) Calling .GetMachineName
	I1009 17:57:26.785209   15980 buildroot.go:166] provisioning hostname "addons-676842"
	I1009 17:57:26.785236   15980 main.go:141] libmachine: (addons-676842) Calling .GetMachineName
	I1009 17:57:26.785404   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:26.788597   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.789080   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:26.789121   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.789377   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:26.789528   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.789759   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.789914   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:26.790102   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:26.790322   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:26.790339   15980 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-676842 && echo "addons-676842" | sudo tee /etc/hostname
	I1009 17:57:26.923289   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-676842
	
	I1009 17:57:26.923345   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:26.926404   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.926753   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:26.926781   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:26.926987   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:26.927185   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.927353   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:26.927461   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:26.927654   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:26.927959   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:26.927986   15980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-676842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-676842/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-676842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 17:57:27.052508   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
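The guarded script above only touches /etc/hosts when no line already names the new hostname, so re-provisioning is idempotent. A sketch that rebuilds the same shell guard for an arbitrary hostname (the real template lives in minikube's provisioner; this is a reconstruction for illustration):

```go
package main

import "fmt"

// hostsFixCmd reproduces the guarded /etc/hosts edit from the log:
// rewrite the 127.0.1.1 line if one exists, otherwise append one.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCmd("addons-676842"))
}
```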
	I1009 17:57:27.052538   15980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 17:57:27.052556   15980 buildroot.go:174] setting up certificates
	I1009 17:57:27.052568   15980 provision.go:84] configureAuth start
	I1009 17:57:27.052577   15980 main.go:141] libmachine: (addons-676842) Calling .GetMachineName
	I1009 17:57:27.052863   15980 main.go:141] libmachine: (addons-676842) Calling .GetIP
	I1009 17:57:27.056295   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.056690   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.056721   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.056943   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:27.059844   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.060350   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.060385   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.060598   15980 provision.go:143] copyHostCerts
	I1009 17:57:27.060674   15980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 17:57:27.060802   15980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 17:57:27.060873   15980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 17:57:27.060934   15980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.addons-676842 san=[127.0.0.1 192.168.39.66 addons-676842 localhost minikube]
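The server certificate is signed by the minikube CA with the SANs listed above (127.0.0.1, 192.168.39.66, the hostname, localhost, minikube). A compressed crypto/x509 sketch of issuing such a SAN cert; key sizes, serial numbers, and validity periods here are assumptions:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key pair; in minikube this is the persisted ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-676842"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-676842", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.66")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}
```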
	I1009 17:57:27.160666   15980 provision.go:177] copyRemoteCerts
	I1009 17:57:27.160735   15980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 17:57:27.160756   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:27.163615   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.163959   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.164003   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.164194   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:27.164379   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:27.164529   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:27.164664   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:27.253610   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 17:57:27.285217   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 17:57:27.317704   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 17:57:27.348663   15980 provision.go:87] duration metric: took 296.081264ms to configureAuth
	I1009 17:57:27.348700   15980 buildroot.go:189] setting minikube options for container-runtime
	I1009 17:57:27.348915   15980 config.go:182] Loaded profile config "addons-676842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 17:57:27.349022   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:27.352226   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.352600   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.352629   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.352811   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:27.353003   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:27.353177   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:27.353303   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:27.353459   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:27.353697   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:27.353719   15980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 17:57:27.842006   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 17:57:27.842031   15980 main.go:141] libmachine: Checking connection to Docker...
	I1009 17:57:27.842063   15980 main.go:141] libmachine: (addons-676842) Calling .GetURL
	I1009 17:57:27.843488   15980 main.go:141] libmachine: (addons-676842) DBG | using libvirt version 8000000
	I1009 17:57:27.846080   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.846432   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.846467   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.846664   15980 main.go:141] libmachine: Docker is up and running!
	I1009 17:57:27.846685   15980 main.go:141] libmachine: Reticulating splines...
	I1009 17:57:27.846694   15980 client.go:171] duration metric: took 20.854898391s to LocalClient.Create
	I1009 17:57:27.846740   15980 start.go:167] duration metric: took 20.854970665s to libmachine.API.Create "addons-676842"
	I1009 17:57:27.846754   15980 start.go:293] postStartSetup for "addons-676842" (driver="kvm2")
	I1009 17:57:27.846768   15980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 17:57:27.846792   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:27.847034   15980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 17:57:27.847074   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:27.849316   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.849825   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:27.849853   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:27.850030   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:27.850228   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:27.850398   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:27.850514   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:27.940631   15980 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 17:57:27.945818   15980 info.go:137] Remote host: Buildroot 2025.02
	I1009 17:57:27.945845   15980 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 17:57:27.945926   15980 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 17:57:27.945951   15980 start.go:296] duration metric: took 99.190962ms for postStartSetup
	I1009 17:57:27.945979   15980 main.go:141] libmachine: (addons-676842) Calling .GetConfigRaw
	I1009 17:57:28.008619   15980 main.go:141] libmachine: (addons-676842) Calling .GetIP
	I1009 17:57:28.011806   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.012205   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:28.012237   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.012543   15980 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/config.json ...
	I1009 17:57:28.080763   15980 start.go:128] duration metric: took 21.105793128s to createHost
	I1009 17:57:28.080810   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:28.083985   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.084295   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:28.084341   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.084594   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:28.084837   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:28.085011   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:28.085161   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:28.085349   15980 main.go:141] libmachine: Using SSH client type: native
	I1009 17:57:28.085551   15980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I1009 17:57:28.085562   15980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 17:57:28.199156   15980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760032648.164561497
	
	I1009 17:57:28.199194   15980 fix.go:216] guest clock: 1760032648.164561497
	I1009 17:57:28.199205   15980 fix.go:229] Guest: 2025-10-09 17:57:28.164561497 +0000 UTC Remote: 2025-10-09 17:57:28.080791984 +0000 UTC m=+21.222924737 (delta=83.769513ms)
	I1009 17:57:28.199228   15980 fix.go:200] guest clock delta is within tolerance: 83.769513ms
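The guest clock check parses `date +%s.%N` output into a timestamp and compares it against the host's wall clock, accepting small drift. A minimal reconstruction (the one-second tolerance is an assumption; the log only shows that an ~84ms delta passes):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns output like "1760032648.164561497" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsecs := int64(0)
	if frac != "" {
		// Pad or truncate the fraction to nanosecond precision.
		frac = (frac + "000000000")[:9]
		if nsecs, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nsecs), nil
}

func main() {
	guest, _ := parseEpoch("1760032648.164561497")
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (tolerated if under 1s)\n", delta)
}
```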
	I1009 17:57:28.199233   15980 start.go:83] releasing machines lock for "addons-676842", held for 21.22434126s
	I1009 17:57:28.199252   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:28.199494   15980 main.go:141] libmachine: (addons-676842) Calling .GetIP
	I1009 17:57:28.202225   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.202598   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:28.202627   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.202752   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:28.203236   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:28.203389   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:28.203502   15980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 17:57:28.203545   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:28.203589   15980 ssh_runner.go:195] Run: cat /version.json
	I1009 17:57:28.203624   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:28.206658   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.206727   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.207126   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:28.207151   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.207181   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:28.207198   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:28.207340   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:28.207423   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:28.207497   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:28.207586   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:28.207627   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:28.207746   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:28.207751   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:28.207880   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:28.289666   15980 ssh_runner.go:195] Run: systemctl --version
	I1009 17:57:28.328091   15980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 17:57:28.495793   15980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 17:57:28.503324   15980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 17:57:28.503391   15980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 17:57:28.523869   15980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 17:57:28.523909   15980 start.go:495] detecting cgroup driver to use...
	I1009 17:57:28.523989   15980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 17:57:28.548608   15980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 17:57:28.566636   15980 docker.go:218] disabling cri-docker service (if available) ...
	I1009 17:57:28.566696   15980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 17:57:28.584953   15980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 17:57:28.601793   15980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 17:57:28.746862   15980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 17:57:28.959790   15980 docker.go:234] disabling docker service ...
	I1009 17:57:28.959856   15980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 17:57:28.976929   15980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 17:57:28.992904   15980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 17:57:29.145387   15980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 17:57:29.290434   15980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 17:57:29.306448   15980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 17:57:29.329885   15980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 17:57:29.329949   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.342876   15980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 17:57:29.342939   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.355748   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.368884   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.381612   15980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 17:57:29.395682   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.408860   15980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.429974   15980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:57:29.442582   15980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 17:57:29.453858   15980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 17:57:29.453935   15980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 17:57:29.473907   15980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
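This is a probe-then-fallback sequence: if the bridge netfilter sysctl is absent, load br_netfilter, then force IPv4 forwarding on either way. A sketch with os/exec (must run as root; the paths are the standard procfs ones):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log: probe the sysctl, fall back
// to loading the kernel module, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl does not exist until br_netfilter is loaded,
		// which is exactly the failure shown in the log above.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup:", err)
	}
}
```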
	I1009 17:57:29.486108   15980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 17:57:29.628241   15980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 17:57:29.744960   15980 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 17:57:29.745079   15980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 17:57:29.751185   15980 start.go:563] Will wait 60s for crictl version
	I1009 17:57:29.751271   15980 ssh_runner.go:195] Run: which crictl
	I1009 17:57:29.755740   15980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 17:57:29.803349   15980 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 17:57:29.803473   15980 ssh_runner.go:195] Run: crio --version
	I1009 17:57:29.833508   15980 ssh_runner.go:195] Run: crio --version
	I1009 17:57:29.864997   15980 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 17:57:29.866395   15980 main.go:141] libmachine: (addons-676842) Calling .GetIP
	I1009 17:57:29.870054   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:29.870924   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:29.870946   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:29.871219   15980 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 17:57:29.876187   15980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 17:57:29.892290   15980 kubeadm.go:883] updating cluster {Name:addons-676842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-676842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 17:57:29.892390   15980 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:57:29.892430   15980 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 17:57:29.932527   15980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 17:57:29.932611   15980 ssh_runner.go:195] Run: which lz4
	I1009 17:57:29.937197   15980 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 17:57:29.942219   15980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 17:57:29.942259   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1009 17:57:31.368381   15980 crio.go:462] duration metric: took 1.431209864s to copy over tarball
	I1009 17:57:31.368457   15980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 17:57:33.059107   15980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.690627125s)
	I1009 17:57:33.059132   15980 crio.go:469] duration metric: took 1.690721273s to extract the tarball
	I1009 17:57:33.059141   15980 ssh_runner.go:146] rm: /preloaded.tar.lz4
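The preload flow is stat-then-transfer: the absence of /preloaded.tar.lz4 is the expected first-boot case, so the ~400 MB tarball is copied over, extracted into /var, and removed. A local-filesystem sketch of the copy-if-missing step (paths are placeholders):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"io/fs"
	"os"
)

// copyIfMissing copies src to dst only when dst does not exist yet,
// mirroring the stat-then-scp sequence in the log above.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already there, skip the transfer
	} else if !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(copyIfMissing("/tmp/preloaded.tar.lz4", "/tmp/copy.tar.lz4"))
}
```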
	I1009 17:57:33.100758   15980 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 17:57:33.144998   15980 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 17:57:33.145029   15980 cache_images.go:85] Images are preloaded, skipping loading
	I1009 17:57:33.145051   15980 kubeadm.go:934] updating node { 192.168.39.66 8443 v1.34.1 crio true true} ...
	I1009 17:57:33.145154   15980 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-676842 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-676842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 17:57:33.145219   15980 ssh_runner.go:195] Run: crio config
	I1009 17:57:33.193412   15980 cni.go:84] Creating CNI manager for ""
	I1009 17:57:33.193442   15980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 17:57:33.193460   15980 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 17:57:33.193486   15980 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.66 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-676842 NodeName:addons-676842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 17:57:33.193656   15980 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-676842"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.66"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.66"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
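
The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:190. A minimal sketch of that kind of rendering with Go's text/template follows; it is illustrative only and not minikube's actual template or types (InitOpts and its fields are invented for the example):

package main

import (
	"os"
	"text/template"
)

// InitOpts holds the handful of fields this sketch substitutes into the
// InitConfiguration stanza; the real options struct is far larger.
type InitOpts struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	opts := InitOpts{
		AdvertiseAddress: "192.168.39.66",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "addons-676842",
		NodeIP:           "192.168.39.66",
	}
	// Writes the rendered InitConfiguration stanza to stdout.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}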
	
	I1009 17:57:33.193736   15980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 17:57:33.205648   15980 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 17:57:33.205719   15980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 17:57:33.217483   15980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 17:57:33.238104   15980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 17:57:33.259449   15980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 17:57:33.279856   15980 ssh_runner.go:195] Run: grep 192.168.39.66	control-plane.minikube.internal$ /etc/hosts
	I1009 17:57:33.283859   15980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
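
The shell pipeline above drops any stale control-plane.minikube.internal line from /etc/hosts and appends a fresh one. The same filter-and-append logic in Go might look like this sketch (ensureHostsEntry is a hypothetical helper; the IP, hostname, and path come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the shell pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	out += fmt.Sprintf("%s\t%s\n", ip, host)
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.66",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}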
	I1009 17:57:33.298766   15980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 17:57:33.441306   15980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 17:57:33.473335   15980 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842 for IP: 192.168.39.66
	I1009 17:57:33.473360   15980 certs.go:195] generating shared ca certs ...
	I1009 17:57:33.473375   15980 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:33.473538   15980 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 17:57:33.674831   15980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt ...
	I1009 17:57:33.674859   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt: {Name:mk8d6c63971b590155ef793f332b7360c6032f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:33.675076   15980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key ...
	I1009 17:57:33.675092   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key: {Name:mka86df84fc2e231a7b1d5b6d58135c939e84926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:33.675203   15980 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 17:57:33.808428   15980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt ...
	I1009 17:57:33.808455   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt: {Name:mkf3723e5df502cbda33cfe11e908fb66089f6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:33.808654   15980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key ...
	I1009 17:57:33.808668   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key: {Name:mkc6782aa9332f0c309f41f313b628a564f1a3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
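
certs.go is creating the shared minikubeCA and proxyClientCA pairs here. For reference, a stripped-down, self-contained sketch of generating a self-signed CA with Go's standard library (illustrative; not minikube's crypto.go, and the ten-year validity is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, err := os.Create("ca.crt")
	if err != nil {
		panic(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, err := os.Create("ca.key")
	if err != nil {
		panic(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}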
	I1009 17:57:33.808772   15980 certs.go:257] generating profile certs ...
	I1009 17:57:33.808829   15980 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.key
	I1009 17:57:33.808853   15980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt with IP's: []
	I1009 17:57:34.044381   15980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt ...
	I1009 17:57:34.044411   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: {Name:mk58be93b9fb94a71f4e1dc70f0363e26d48f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.045106   15980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.key ...
	I1009 17:57:34.045126   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.key: {Name:mk3663bb4994cda4698d5c512b06efb11a652883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.045236   15980 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key.4cff7371
	I1009 17:57:34.045258   15980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt.4cff7371 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.66]
	I1009 17:57:34.466109   15980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt.4cff7371 ...
	I1009 17:57:34.466136   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt.4cff7371: {Name:mk851b00529e6812487f62fcb5b1123e8058507b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.466327   15980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key.4cff7371 ...
	I1009 17:57:34.466345   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key.4cff7371: {Name:mkac561ea7b9f9a4624e159e9121342182d6fa48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.466448   15980 certs.go:382] copying /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt.4cff7371 -> /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt
	I1009 17:57:34.466527   15980 certs.go:386] copying /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key.4cff7371 -> /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key
	I1009 17:57:34.466583   15980 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.key
	I1009 17:57:34.466601   15980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.crt with IP's: []
	I1009 17:57:34.591717   15980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.crt ...
	I1009 17:57:34.591745   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.crt: {Name:mkdae59f6a01536e513749b1beb6432d7b080432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.591937   15980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.key ...
	I1009 17:57:34.591952   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.key: {Name:mk2097506b9637f456e9ab887a2d3b5f3f5e316f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:34.592173   15980 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 17:57:34.592211   15980 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 17:57:34.592230   15980 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 17:57:34.592248   15980 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 17:57:34.592826   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 17:57:34.623404   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 17:57:34.653823   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 17:57:34.684664   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 17:57:34.716962   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 17:57:34.748426   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 17:57:34.778942   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 17:57:34.809846   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 17:57:34.841517   15980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 17:57:34.873232   15980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 17:57:34.895251   15980 ssh_runner.go:195] Run: openssl version
	I1009 17:57:34.902408   15980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 17:57:34.916601   15980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:57:34.922265   15980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:57:34.922342   15980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:57:34.930175   15980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 17:57:34.943862   15980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 17:57:34.949229   15980 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 17:57:34.949286   15980 kubeadm.go:400] StartCluster: {Name:addons-676842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-676842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:57:34.949349   15980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 17:57:34.949396   15980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 17:57:34.991715   15980 cri.go:89] found id: ""
	I1009 17:57:34.991797   15980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 17:57:35.004494   15980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 17:57:35.016831   15980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 17:57:35.028809   15980 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 17:57:35.028838   15980 kubeadm.go:157] found existing configuration files:
	
	I1009 17:57:35.028904   15980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 17:57:35.042125   15980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 17:57:35.042193   15980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 17:57:35.056147   15980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 17:57:35.067384   15980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 17:57:35.067442   15980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 17:57:35.083079   15980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 17:57:35.099180   15980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 17:57:35.099249   15980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 17:57:35.117028   15980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 17:57:35.130926   15980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 17:57:35.130995   15980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
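
Each check above greps a kubeconfig for the expected control-plane endpoint and removes the file when the endpoint is absent; a missing file takes the same path, which is why all four are deleted on this first start. The logic reduces to roughly this sketch (removeIfStale is a made-up name; paths and endpoint come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes conf unless it already points at endpoint.
// A missing file is treated the same as a stale one, matching the
// "config check failed" path in the log above.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}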
	I1009 17:57:35.142494   15980 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 17:57:35.291729   15980 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 17:57:47.172240   15980 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 17:57:47.172319   15980 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 17:57:47.172399   15980 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 17:57:47.172488   15980 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 17:57:47.172595   15980 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 17:57:47.172653   15980 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 17:57:47.174248   15980 out.go:252]   - Generating certificates and keys ...
	I1009 17:57:47.174321   15980 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 17:57:47.174381   15980 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 17:57:47.174439   15980 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 17:57:47.174488   15980 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 17:57:47.174540   15980 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 17:57:47.174591   15980 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 17:57:47.174660   15980 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 17:57:47.174774   15980 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-676842 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I1009 17:57:47.174824   15980 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 17:57:47.174939   15980 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-676842 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I1009 17:57:47.175026   15980 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 17:57:47.175124   15980 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 17:57:47.175193   15980 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 17:57:47.175241   15980 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 17:57:47.175283   15980 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 17:57:47.175334   15980 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 17:57:47.175381   15980 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 17:57:47.175436   15980 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 17:57:47.175485   15980 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 17:57:47.175554   15980 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 17:57:47.175608   15980 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 17:57:47.177008   15980 out.go:252]   - Booting up control plane ...
	I1009 17:57:47.177115   15980 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 17:57:47.177195   15980 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 17:57:47.177281   15980 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 17:57:47.177385   15980 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 17:57:47.177510   15980 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 17:57:47.177658   15980 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 17:57:47.177757   15980 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 17:57:47.177817   15980 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 17:57:47.177935   15980 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 17:57:47.178024   15980 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 17:57:47.178145   15980 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003043147s
	I1009 17:57:47.178277   15980 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 17:57:47.178384   15980 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.66:8443/livez
	I1009 17:57:47.178517   15980 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 17:57:47.178636   15980 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 17:57:47.178738   15980 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.411246624s
	I1009 17:57:47.178843   15980 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.916897731s
	I1009 17:57:47.178947   15980 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.004367195s
	I1009 17:57:47.179123   15980 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 17:57:47.179251   15980 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 17:57:47.179329   15980 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 17:57:47.179529   15980 kubeadm.go:318] [mark-control-plane] Marking the node addons-676842 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 17:57:47.179605   15980 kubeadm.go:318] [bootstrap-token] Using token: irah7c.7hlpgtmjyprophdo
	I1009 17:57:47.182165   15980 out.go:252]   - Configuring RBAC rules ...
	I1009 17:57:47.182290   15980 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 17:57:47.182402   15980 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 17:57:47.182540   15980 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 17:57:47.182684   15980 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 17:57:47.182784   15980 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 17:57:47.182863   15980 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 17:57:47.182970   15980 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 17:57:47.183010   15980 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 17:57:47.183074   15980 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 17:57:47.183084   15980 kubeadm.go:318] 
	I1009 17:57:47.183162   15980 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 17:57:47.183177   15980 kubeadm.go:318] 
	I1009 17:57:47.183285   15980 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 17:57:47.183294   15980 kubeadm.go:318] 
	I1009 17:57:47.183330   15980 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 17:57:47.183418   15980 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 17:57:47.183500   15980 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 17:57:47.183513   15980 kubeadm.go:318] 
	I1009 17:57:47.183595   15980 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 17:57:47.183602   15980 kubeadm.go:318] 
	I1009 17:57:47.183674   15980 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 17:57:47.183683   15980 kubeadm.go:318] 
	I1009 17:57:47.183768   15980 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 17:57:47.183856   15980 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 17:57:47.183912   15980 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 17:57:47.183918   15980 kubeadm.go:318] 
	I1009 17:57:47.183988   15980 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 17:57:47.184074   15980 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 17:57:47.184081   15980 kubeadm.go:318] 
	I1009 17:57:47.184149   15980 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token irah7c.7hlpgtmjyprophdo \
	I1009 17:57:47.184231   15980 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c48ca563301a9993a2f7da193dd1a5d16bad3f0e5e0903a06e9855a15622cfa2 \
	I1009 17:57:47.184249   15980 kubeadm.go:318] 	--control-plane 
	I1009 17:57:47.184257   15980 kubeadm.go:318] 
	I1009 17:57:47.184346   15980 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 17:57:47.184366   15980 kubeadm.go:318] 
	I1009 17:57:47.184477   15980 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token irah7c.7hlpgtmjyprophdo \
	I1009 17:57:47.184650   15980 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c48ca563301a9993a2f7da193dd1a5d16bad3f0e5e0903a06e9855a15622cfa2 
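
The --discovery-token-ca-cert-hash value in the join commands is the standard kubeadm CA key hash: the hex-encoded SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the ca.crt written earlier in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo,
	// not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}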
	I1009 17:57:47.184662   15980 cni.go:84] Creating CNI manager for ""
	I1009 17:57:47.184669   15980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 17:57:47.186531   15980 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 17:57:47.188123   15980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 17:57:47.203492   15980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
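
The 496-byte /etc/cni/net.d/1-k8s.conflist pushed here is not shown in the log. A representative bridge-plus-portmap conflist of the kind a bridge CNI setup uses, with the podSubnet configured above, might look like the constant in this sketch (the exact file minikube writes may differ):

package main

import (
	"encoding/json"
	"fmt"
)

// A representative bridge CNI conflist; illustrative only.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	var cfg struct {
		Name    string `json:"name"`
		Plugins []struct {
			Type string `json:"type"`
		} `json:"plugins"`
	}
	// Parse the conflist and list the chained plugin types.
	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
		panic(err)
	}
	for _, p := range cfg.Plugins {
		fmt.Println(cfg.Name, "plugin:", p.Type)
	}
}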
	I1009 17:57:47.231504   15980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 17:57:47.231609   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:47.231658   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-676842 minikube.k8s.io/updated_at=2025_10_09T17_57_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=addons-676842 minikube.k8s.io/primary=true
	I1009 17:57:47.398286   15980 ops.go:34] apiserver oom_adj: -16
	I1009 17:57:47.398420   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:47.898716   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:48.398660   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:48.898707   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:49.398538   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:49.899164   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:50.399104   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:50.898635   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:51.399264   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:51.899081   15980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 17:57:51.991237   15980 kubeadm.go:1113] duration metric: took 4.759709931s to wait for elevateKubeSystemPrivileges
	I1009 17:57:51.991285   15980 kubeadm.go:402] duration metric: took 17.042001571s to StartCluster
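
The repeated `kubectl get sa default` runs above are a 500ms poll: minikube waits for the default service account to exist before declaring elevateKubeSystemPrivileges done. The retry shape is roughly this sketch (waitForDefaultSA is a hypothetical name; the kubeconfig path and cadence are taken from the log, the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// deadline passes, mirroring the 500ms cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}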
	I1009 17:57:51.991310   15980 settings.go:142] acquiring lock: {Name:mke07af691f8cd3212916e5b2a1eaf75338ed4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:51.991474   15980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 17:57:51.991984   15980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/kubeconfig: {Name:mk1298c937114ca750ad76f4defd3e77cda49052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:57:51.992249   15980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 17:57:51.992280   15980 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 17:57:51.992383   15980 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 17:57:51.992526   15980 addons.go:69] Setting yakd=true in profile "addons-676842"
	I1009 17:57:51.992534   15980 addons.go:69] Setting gcp-auth=true in profile "addons-676842"
	I1009 17:57:51.992557   15980 addons.go:238] Setting addon yakd=true in "addons-676842"
	I1009 17:57:51.992563   15980 mustload.go:65] Loading cluster: addons-676842
	I1009 17:57:51.992567   15980 addons.go:69] Setting registry=true in profile "addons-676842"
	I1009 17:57:51.992590   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.992594   15980 addons.go:238] Setting addon registry=true in "addons-676842"
	I1009 17:57:51.992612   15980 config.go:182] Loaded profile config "addons-676842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 17:57:51.992607   15980 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-676842"
	I1009 17:57:51.992624   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.992635   15980 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-676842"
	I1009 17:57:51.992717   15980 config.go:182] Loaded profile config "addons-676842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 17:57:51.992753   15980 addons.go:69] Setting registry-creds=true in profile "addons-676842"
	I1009 17:57:51.992765   15980 addons.go:238] Setting addon registry-creds=true in "addons-676842"
	I1009 17:57:51.992789   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993082   15980 addons.go:69] Setting storage-provisioner=true in profile "addons-676842"
	I1009 17:57:51.993093   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993096   15980 addons.go:238] Setting addon storage-provisioner=true in "addons-676842"
	I1009 17:57:51.993110   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993119   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993127   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993132   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993141   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993146   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993151   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993175   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993204   15980 addons.go:69] Setting inspektor-gadget=true in profile "addons-676842"
	I1009 17:57:51.993222   15980 addons.go:238] Setting addon inspektor-gadget=true in "addons-676842"
	I1009 17:57:51.993228   15980 addons.go:69] Setting volcano=true in profile "addons-676842"
	I1009 17:57:51.993238   15980 addons.go:238] Setting addon volcano=true in "addons-676842"
	I1009 17:57:51.993247   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993255   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993256   15980 addons.go:69] Setting metrics-server=true in profile "addons-676842"
	I1009 17:57:51.993277   15980 addons.go:238] Setting addon metrics-server=true in "addons-676842"
	I1009 17:57:51.993299   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993503   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993538   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993095   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993602   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993605   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993613   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993632   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993636   15980 addons.go:69] Setting cloud-spanner=true in profile "addons-676842"
	I1009 17:57:51.993638   15980 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-676842"
	I1009 17:57:51.993645   15980 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-676842"
	I1009 17:57:51.993647   15980 addons.go:238] Setting addon cloud-spanner=true in "addons-676842"
	I1009 17:57:51.993655   15980 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-676842"
	I1009 17:57:51.993678   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993682   15980 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-676842"
	I1009 17:57:51.993683   15980 addons.go:69] Setting volumesnapshots=true in profile "addons-676842"
	I1009 17:57:51.993694   15980 addons.go:238] Setting addon volumesnapshots=true in "addons-676842"
	I1009 17:57:51.993695   15980 addons.go:69] Setting ingress=true in profile "addons-676842"
	I1009 17:57:51.993706   15980 addons.go:69] Setting default-storageclass=true in profile "addons-676842"
	I1009 17:57:51.993706   15980 addons.go:238] Setting addon ingress=true in "addons-676842"
	I1009 17:57:51.993708   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.993714   15980 addons.go:69] Setting ingress-dns=true in profile "addons-676842"
	I1009 17:57:51.993718   15980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-676842"
	I1009 17:57:51.993720   15980 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-676842"
	I1009 17:57:51.993723   15980 addons.go:238] Setting addon ingress-dns=true in "addons-676842"
	I1009 17:57:51.993732   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.993732   15980 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-676842"
	I1009 17:57:51.993819   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.993978   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.994024   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.994211   15980 out.go:179] * Verifying Kubernetes components...
	I1009 17:57:51.994260   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.994492   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.994587   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.994618   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.994645   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.994793   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.994848   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.994848   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:51.995633   15980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 17:57:51.999550   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.999593   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:51.999652   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:51.999682   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.004333   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.004394   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.008384   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.008440   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.008546   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.008676   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.008868   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.008918   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.031969   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I1009 17:57:52.032717   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.033842   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.033862   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.034940   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1009 17:57:52.035303   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.035597   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.035980   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I1009 17:57:52.036549   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.037272   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.037288   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.037354   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.038105   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I1009 17:57:52.039277   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.039918   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.039935   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.040174   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.040189   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.040639   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.040744   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I1009 17:57:52.040853   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I1009 17:57:52.041065   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.041224   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.041278   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.041509   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I1009 17:57:52.041792   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.041840   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.042744   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.042790   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.043002   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.043346   15980 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-676842"
	I1009 17:57:52.043385   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:52.043560   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.043601   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.043769   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.043810   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.044000   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.044261   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.044585   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.044642   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.044818   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40421
	I1009 17:57:52.044999   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.045018   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.046201   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.046279   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.048381   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I1009 17:57:52.048382   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:52.048818   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.048859   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.049211   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.049265   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.049530   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.050217   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.050354   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.050927   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.050942   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.051348   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.051976   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.052022   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.052346   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.052917   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.052960   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.056648   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41907
	I1009 17:57:52.057292   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.057910   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.057931   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.060809   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.061056   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37147
	I1009 17:57:52.061707   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.061751   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.062015   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.062528   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.062545   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.062967   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.062992   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.063558   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.063578   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.063674   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.063716   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.063983   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.064675   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.064735   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.066558   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I1009 17:57:52.067902   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.068768   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.068786   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.069343   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.070082   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.070120   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.077505   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I1009 17:57:52.081756   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.081954   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1009 17:57:52.085557   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.085594   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.085684   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I1009 17:57:52.085914   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I1009 17:57:52.086082   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.086207   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.086517   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.086697   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.086713   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.087932   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.088179   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.088683   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I1009 17:57:52.089411   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.090088   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.090135   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.090496   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.090990   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I1009 17:57:52.091224   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.091260   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.091504   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.092009   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I1009 17:57:52.092171   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.092593   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.092727   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.092944   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.092958   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.093355   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.093520   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.093535   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.093784   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.094326   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.094336   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.094456   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.094532   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.096553   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.096596   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.096564   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I1009 17:57:52.096794   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.096811   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.096882   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.097373   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.097390   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I1009 17:57:52.097738   15980 addons.go:238] Setting addon default-storageclass=true in "addons-676842"
	I1009 17:57:52.097778   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:52.097968   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.098072   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.098159   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.098181   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.098386   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 17:57:52.098343   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.100644   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.100670   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.101172   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.101916   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.101937   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 17:57:52.102012   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.102140   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.101956   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.102670   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.103350   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I1009 17:57:52.104217   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.102889   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.104360   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.104748   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.104832   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I1009 17:57:52.106145   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.106171   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.106542   15980 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 17:57:52.106542   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.106641   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.107047   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 17:57:52.107661   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I1009 17:57:52.107664   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.108075   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.108169   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.108300   15980 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 17:57:52.108314   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 17:57:52.108344   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.109229   15980 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 17:57:52.109335   15980 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 17:57:52.110081   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.110095   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
	I1009 17:57:52.110195   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.110216   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.110622   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.110681   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.110857   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.110888   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.111140   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.111676   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.111875   15980 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 17:57:52.111897   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 17:57:52.111922   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.112317   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 17:57:52.112931   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.113084   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.113100   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.113564   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.113574   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.113903   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.114108   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.114142   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I1009 17:57:52.114892   15980 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 17:57:52.115835   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 17:57:52.115944   15980 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 17:57:52.115954   15980 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 17:57:52.115973   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.117006   15980 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 17:57:52.117020   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 17:57:52.117061   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.117297   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.117678   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.118153   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.118171   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.118746   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.118918   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I1009 17:57:52.119030   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 17:57:52.119177   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.119237   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.119334   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.119516   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.120141   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.120194   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.121004   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.121248   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 17:57:52.121945   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.122818   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.122916   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.123310   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 17:57:52.123506   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.124732   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 17:57:52.124747   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 17:57:52.124764   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.124871   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.124915   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.125004   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.125025   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.125015   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46865
	I1009 17:57:52.125071   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.125987   15980 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 17:57:52.126097   15980 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1009 17:57:52.127569   15980 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 17:57:52.127640   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 17:57:52.127713   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.127842   15980 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 17:57:52.127855   15980 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 17:57:52.127878   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.128276   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.128448   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.128465   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.128549   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.128567   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.128599   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.128609   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.129193   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.129255   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.129303   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.129757   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.129923   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.130212   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.136328   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.136345   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I1009 17:57:52.136510   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.136544   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.136564   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.136627   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.138265   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.139054   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.139259   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.139273   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.139347   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.140095   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.140140   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.140183   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1009 17:57:52.140310   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.140346   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.140511   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.140522   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.140580   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.140592   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.140831   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.140855   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.141019   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.141024   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.141149   15980 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 17:57:52.141407   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.141463   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.141483   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.142074   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.142153   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I1009 17:57:52.142453   15980 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 17:57:52.142532   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 17:57:52.142603   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.142606   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.142851   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.143668   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.143768   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I1009 17:57:52.144225   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.144406   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.144486   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.144592   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:52.144771   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.144833   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.144856   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.144952   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.145112   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:57:52.145157   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.145327   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.145410   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.145568   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.145807   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.145870   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.146706   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.146852   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.147053   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.147134   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.147220   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.148007   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.148583   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.148642   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.148658   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.148723   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.148822   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:52.148837   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:52.149426   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:52.149454   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.149724   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.150109   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.149911   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:52.150184   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:52.150194   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:52.150200   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:52.150531   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:52.150575   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:52.150583   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	W1009 17:57:52.150692   15980 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
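The warning above is non-fatal: the volcano addon is skipped because it does not support the crio container runtime, and the remaining addons continue to be enabled concurrently. Which addons ended up enabled or disabled for the profile can be checked afterwards with the standard minikube CLI (illustrative; not part of the test run):

	minikube -p addons-676842 addons list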
	I1009 17:57:52.152103   15980 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 17:57:52.153350   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I1009 17:57:52.153582   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.154145   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.154516   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.154564   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.154708   15980 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 17:57:52.154790   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I1009 17:57:52.155198   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.155220   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.155221   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.155243   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.155450   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.155603   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.155716   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.155800   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.155852   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.155907   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.156082   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.156209   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.156220   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.156672   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.156879   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.158721   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.159198   15980 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 17:57:52.159225   15980 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 17:57:52.159247   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.161338   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I1009 17:57:52.161393   15980 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 17:57:52.161398   15980 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 17:57:52.161431   15980 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 17:57:52.161475   15980 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 17:57:52.161486   15980 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 17:57:52.161816   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.161796   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.162813   15980 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 17:57:52.162841   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 17:57:52.162861   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.162817   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 17:57:52.162927   15980 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 17:57:52.162940   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.163387   15980 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 17:57:52.163405   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 17:57:52.163494   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.163517   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.163432   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.163608   15980 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 17:57:52.164141   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.164427   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.164745   15980 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 17:57:52.164759   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 17:57:52.164775   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.169219   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.170329   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.170942   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.171122   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.171382   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.171659   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.171715   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.171755   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.171789   15980 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 17:57:52.171917   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.172097   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.172118   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.172220   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.172771   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.172803   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.172821   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.172836   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.172947   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.173164   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.173368   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I1009 17:57:52.173422   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.173429   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.173447   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.173488   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.173694   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.173706   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.173712   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.173788   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.173825   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.173932   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.174066   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.174067   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.174102   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.174218   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.174295   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:57:52.174301   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.174329   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.174263   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.174425   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.175235   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:57:52.175261   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:57:52.175623   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:57:52.175741   15980 out.go:179]   - Using image docker.io/busybox:stable
	I1009 17:57:52.175848   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:57:52.176895   15980 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 17:57:52.176914   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 17:57:52.176931   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.177740   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:57:52.177961   15980 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 17:57:52.177975   15980 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 17:57:52.178005   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:52.181742   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.181999   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.182324   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.182394   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.182561   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:52.182593   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:52.182600   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.182841   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.182895   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:52.183023   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.183081   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:52.183188   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:52.183242   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:52.183376   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
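Up to this point each addon is being prepared in its own goroutine, which explains the interleaved output: every goroutine launches a separate docker-machine-driver-kvm2 plugin process and talks to it over a loopback RPC socket (the "Plugin server listening at address 127.0.0.1:NNNNN" lines), then opens an SSH session to the node at 192.168.39.66 as user "docker". The "scp memory --> path (N bytes)" lines indicate that the manifest bytes are embedded in the minikube binary and streamed over that SSH session rather than read from a file on disk. A rough way to observe or reproduce the same steps by hand (illustrative only; storage-provisioner.yaml here stands for a hypothetical local copy of the embedded manifest, and the loopback ports change on every run):

	# plugin processes and their loopback listeners, while addons are enabling
	pgrep -af docker-machine-driver-kvm2
	ss -tlnp | grep 127.0.0.1
	# manual equivalent of one "scp memory --> ..." copy into the node
	minikube -p addons-676842 cp storage-provisioner.yaml /etc/kubernetes/addons/storage-provisioner.yaml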
	I1009 17:57:52.542511   15980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 17:57:52.542599   15980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
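The single /bin/bash -c pipeline above is easier to follow split across lines. It fetches the coredns ConfigMap, uses sed to insert a hosts{} block (resolving host.minikube.internal to the host-side gateway 192.168.39.1) immediately before the Corefile's forward plugin and a log directive before errors, then writes the result back with kubectl replace. The same command, reformatted for readability (the kubectl binary and --kubeconfig flag are abbreviated to KUBECTL):

	KUBECTL='/var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig'
	sudo $KUBECTL -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | sudo $KUBECTL replace -f -

Once replaced, host.minikube.internal should resolve from inside cluster pods, e.g. kubectl run busybox --rm -it --image=busybox -- nslookup host.minikube.internal.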
	I1009 17:57:52.743523   15980 node_ready.go:35] waiting up to 6m0s for node "addons-676842" to be "Ready" ...
	I1009 17:57:52.750291   15980 node_ready.go:49] node "addons-676842" is "Ready"
	I1009 17:57:52.750335   15980 node_ready.go:38] duration metric: took 6.767007ms for node "addons-676842" to be "Ready" ...
	I1009 17:57:52.750354   15980 api_server.go:52] waiting for apiserver process to appear ...
	I1009 17:57:52.750411   15980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
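This control-plane readiness probe is process-level rather than HTTP-level: pgrep's -f flag matches the pattern against the full command line, -x requires that (full) match to be exact, and -n selects the newest matching process, so the command only succeeds once a kube-apiserver process whose arguments mention minikube exists. It can be re-run by hand against the VM:

	minikube -p addons-676842 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	# prints the apiserver PID and exits 0 once the process is up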
	I1009 17:57:52.855526   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 17:57:53.026147   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 17:57:53.076495   15980 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 17:57:53.076519   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 17:57:53.160809   15980 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 17:57:53.160836   15980 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 17:57:53.169021   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 17:57:53.318905   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 17:57:53.387943   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 17:57:53.456941   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 17:57:53.456970   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 17:57:53.535754   15980 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:57:53.535778   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 17:57:53.612295   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 17:57:53.675560   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 17:57:53.709529   15980 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 17:57:53.709559   15980 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 17:57:53.721250   15980 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 17:57:53.721276   15980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 17:57:53.764606   15980 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 17:57:53.764632   15980 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 17:57:53.883371   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 17:57:53.886181   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 17:57:53.954095   15980 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 17:57:53.954120   15980 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 17:57:53.963836   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 17:57:53.963869   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 17:57:54.002400   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:57:54.056549   15980 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 17:57:54.056573   15980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 17:57:54.103224   15980 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 17:57:54.103257   15980 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 17:57:54.104120   15980 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 17:57:54.104161   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 17:57:54.331281   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 17:57:54.331305   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 17:57:54.341175   15980 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 17:57:54.341200   15980 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 17:57:54.405132   15980 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 17:57:54.405159   15980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 17:57:54.442117   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 17:57:54.487259   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
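Each of these kubectl apply invocations passes several -f files in a single call; kubectl applies them in the order given on the command line. Whether a given addon's workload then becomes ready can be followed with a rollout watch (illustrative; the deployment name metrics-server is the upstream default shipped in metrics-server-deployment.yaml, not something printed in this log):

	kubectl --context addons-676842 -n kube-system rollout status deployment/metrics-server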
	I1009 17:57:54.710639   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 17:57:54.710671   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 17:57:54.737848   15980 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 17:57:54.737871   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 17:57:54.745219   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 17:57:54.745250   15980 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 17:57:55.071351   15980 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 17:57:55.071383   15980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 17:57:55.191260   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 17:57:55.240755   15980 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 17:57:55.240790   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 17:57:55.342415   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 17:57:55.342440   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 17:57:55.655397   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 17:57:55.655427   15980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 17:57:55.875760   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 17:57:55.922261   15980 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.171828599s)
	I1009 17:57:55.922308   15980 api_server.go:72] duration metric: took 3.92999514s to wait for apiserver process to appear ...
	I1009 17:57:55.922317   15980 api_server.go:88] waiting for apiserver healthz status ...
	I1009 17:57:55.922338   15980 api_server.go:253] Checking apiserver healthz at https://192.168.39.66:8443/healthz ...
	I1009 17:57:55.922266   15980 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.379632538s)
	I1009 17:57:55.922390   15980 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 17:57:55.940031   15980 api_server.go:279] https://192.168.39.66:8443/healthz returned 200:
	ok
	I1009 17:57:55.941267   15980 api_server.go:141] control plane version: v1.34.1
	I1009 17:57:55.941291   15980 api_server.go:131] duration metric: took 18.9667ms to wait for apiserver health ...
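The healthz poll above can be reproduced directly against the same endpoint; -k skips certificate verification since the cluster's CA is not in the host trust store (the CA file under the minikube profile directory could be passed with --cacert instead):

	curl -k https://192.168.39.66:8443/healthz
	# expected body once healthy, matching the log above:
	ok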
	I1009 17:57:55.941300   15980 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 17:57:55.958948   15980 system_pods.go:59] 10 kube-system pods found
	I1009 17:57:55.958983   15980 system_pods.go:61] "amd-gpu-device-plugin-ns4vt" [d1c5cfb9-6426-4b63-aeb5-cd922b35ee18] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1009 17:57:55.958991   15980 system_pods.go:61] "coredns-66bc5c9577-fkb85" [3aa8564a-0976-40d0-a6f6-1d38f0e7271b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 17:57:55.958997   15980 system_pods.go:61] "coredns-66bc5c9577-vclxq" [59422eba-75ea-4f86-b424-4b4344fbe0c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 17:57:55.959001   15980 system_pods.go:61] "etcd-addons-676842" [02993122-4628-4b50-bbe9-aadcdb0441ab] Running
	I1009 17:57:55.959007   15980 system_pods.go:61] "kube-apiserver-addons-676842" [439b46c3-0809-481a-bcf1-b9e8a6ae398d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 17:57:55.959012   15980 system_pods.go:61] "kube-controller-manager-addons-676842" [42e7ac60-59d8-48cd-92b6-fd22d8df4d6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 17:57:55.959016   15980 system_pods.go:61] "kube-proxy-6dblk" [5617bb0d-cf1c-451e-96a6-5bdf02363249] Running
	I1009 17:57:55.959020   15980 system_pods.go:61] "kube-scheduler-addons-676842" [2b048a83-37d4-405f-aea0-42bc8f4fd467] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 17:57:55.959025   15980 system_pods.go:61] "nvidia-device-plugin-daemonset-qj474" [c511ee7e-c0bc-4960-94e2-a78daede3a40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 17:57:55.959031   15980 system_pods.go:61] "registry-creds-764b6fb674-fvh8z" [8d06d83c-dd03-49c7-9549-810777667608] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 17:57:55.959050   15980 system_pods.go:74] duration metric: took 17.732892ms to wait for pod list to return data ...
	I1009 17:57:55.959063   15980 default_sa.go:34] waiting for default service account to be created ...
	I1009 17:57:55.966604   15980 default_sa.go:45] found service account: "default"
	I1009 17:57:55.966633   15980 default_sa.go:55] duration metric: took 7.562049ms for default service account to be created ...
	I1009 17:57:55.966645   15980 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 17:57:55.974640   15980 system_pods.go:86] 10 kube-system pods found
	I1009 17:57:55.974686   15980 system_pods.go:89] "amd-gpu-device-plugin-ns4vt" [d1c5cfb9-6426-4b63-aeb5-cd922b35ee18] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1009 17:57:55.974699   15980 system_pods.go:89] "coredns-66bc5c9577-fkb85" [3aa8564a-0976-40d0-a6f6-1d38f0e7271b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 17:57:55.974712   15980 system_pods.go:89] "coredns-66bc5c9577-vclxq" [59422eba-75ea-4f86-b424-4b4344fbe0c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 17:57:55.974720   15980 system_pods.go:89] "etcd-addons-676842" [02993122-4628-4b50-bbe9-aadcdb0441ab] Running
	I1009 17:57:55.974729   15980 system_pods.go:89] "kube-apiserver-addons-676842" [439b46c3-0809-481a-bcf1-b9e8a6ae398d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 17:57:55.974738   15980 system_pods.go:89] "kube-controller-manager-addons-676842" [42e7ac60-59d8-48cd-92b6-fd22d8df4d6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 17:57:55.974748   15980 system_pods.go:89] "kube-proxy-6dblk" [5617bb0d-cf1c-451e-96a6-5bdf02363249] Running
	I1009 17:57:55.974757   15980 system_pods.go:89] "kube-scheduler-addons-676842" [2b048a83-37d4-405f-aea0-42bc8f4fd467] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 17:57:55.974770   15980 system_pods.go:89] "nvidia-device-plugin-daemonset-qj474" [c511ee7e-c0bc-4960-94e2-a78daede3a40] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 17:57:55.974781   15980 system_pods.go:89] "registry-creds-764b6fb674-fvh8z" [8d06d83c-dd03-49c7-9549-810777667608] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 17:57:55.974795   15980 system_pods.go:126] duration metric: took 8.142966ms to wait for k8s-apps to be running ...
	I1009 17:57:55.974809   15980 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 17:57:55.974880   15980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 17:57:56.066438   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 17:57:56.066469   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 17:57:56.370679   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 17:57:56.370701   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 17:57:56.428515   15980 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-676842" context rescaled to 1 replicas
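kubeadm-based clusters start CoreDNS with two replicas (both are visible in the kube-system pod list above); on a single-node cluster minikube scales this down to one, which is what the rescale message records. The equivalent manual operation:

	kubectl --context addons-676842 -n kube-system scale deployment coredns --replicas=1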
	I1009 17:57:56.541463   15980 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 17:57:56.541498   15980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 17:57:56.819383   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 17:57:57.742615   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.887045979s)
	I1009 17:57:57.742676   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.742692   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.742677   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.716498129s)
	I1009 17:57:57.742767   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.742785   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.742799   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.573739889s)
	I1009 17:57:57.742832   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.742847   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.742965   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.742979   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:57.742988   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.742995   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.743070   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.743082   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:57.743090   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.743098   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.743178   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:57.743201   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.743208   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:57.743219   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:57.743226   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:57.743301   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:57.743326   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.743343   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:57.743650   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.743667   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:57.743962   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:57.743996   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:57.744004   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:58.826315   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.507374704s)
	I1009 17:57:58.826344   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.438374099s)
	I1009 17:57:58.826371   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:58.826371   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:58.826382   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:58.826385   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:58.826682   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:58.826690   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:58.826705   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:58.826709   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:58.826713   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:58.826719   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:58.826713   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:58.826729   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:57:58.826732   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:58.826737   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:57:58.826923   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:57:58.826947   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:58.826956   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:57:58.826968   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:58.826956   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:57:59.583891   15980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 17:57:59.583931   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:57:59.587762   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:59.588221   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:57:59.588248   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:57:59.588503   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:57:59.588757   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:57:59.588918   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:57:59.589119   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:57:59.865380   15980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 17:57:59.987652   15980 addons.go:238] Setting addon gcp-auth=true in "addons-676842"
	I1009 17:57:59.987705   15980 host.go:66] Checking if "addons-676842" exists ...
	I1009 17:57:59.988003   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:57:59.988033   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:58:00.003384   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1009 17:58:00.003833   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:58:00.004275   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:58:00.004298   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:58:00.004670   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:58:00.005210   15980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 17:58:00.005242   15980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 17:58:00.019386   15980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I1009 17:58:00.019928   15980 main.go:141] libmachine: () Calling .GetVersion
	I1009 17:58:00.020434   15980 main.go:141] libmachine: Using API Version  1
	I1009 17:58:00.020461   15980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 17:58:00.020833   15980 main.go:141] libmachine: () Calling .GetMachineName
	I1009 17:58:00.021054   15980 main.go:141] libmachine: (addons-676842) Calling .GetState
	I1009 17:58:00.022961   15980 main.go:141] libmachine: (addons-676842) Calling .DriverName
	I1009 17:58:00.023210   15980 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 17:58:00.023236   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHHostname
	I1009 17:58:00.026467   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:58:00.026977   15980 main.go:141] libmachine: (addons-676842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:95:ff", ip: ""} in network mk-addons-676842: {Iface:virbr1 ExpiryTime:2025-10-09 18:57:22 +0000 UTC Type:0 Mac:52:54:00:7c:95:ff Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:addons-676842 Clientid:01:52:54:00:7c:95:ff}
	I1009 17:58:00.027000   15980 main.go:141] libmachine: (addons-676842) DBG | domain addons-676842 has defined IP address 192.168.39.66 and MAC address 52:54:00:7c:95:ff in network mk-addons-676842
	I1009 17:58:00.027247   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHPort
	I1009 17:58:00.027445   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHKeyPath
	I1009 17:58:00.027636   15980 main.go:141] libmachine: (addons-676842) Calling .GetSSHUsername
	I1009 17:58:00.027804   15980 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/addons-676842/id_rsa Username:docker}
	I1009 17:58:00.971427   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.295827165s)
	I1009 17:58:00.971470   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.088057143s)
	I1009 17:58:00.971482   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971496   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971513   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971538   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971540   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.085323667s)
	I1009 17:58:00.971565   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971609   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971671   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.969235233s)
	W1009 17:58:00.971700   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:00.971727   15980 retry.go:31] will retry after 131.498883ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
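For context on this retry loop: kubectl's client-side validation requires every YAML document passed to apply to declare both apiVersion and kind, and this run's ig-crd.yaml evidently contains a document (possibly an empty one left by a stray `---` separator) that declares neither. A minimal, hypothetical sketch of a document that would pass that validation (the CRD name is illustrative, not taken from this run):

    apiVersion: apiextensions.k8s.io/v1   # required by client-side validation
    kind: CustomResourceDefinition        # required by client-side validation
    metadata:
      name: traces.gadget.kinvolk.io      # hypothetical name, for illustration only
    spec: {}                              # real spec omitted for brevity

Note that --validate=false, suggested in the error text, would only mask the problem; the apply keeps failing on each retry below because the file itself never changes.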
	I1009 17:58:00.971740   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.529594167s)
	I1009 17:58:00.971758   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971769   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971774   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.484490433s)
	I1009 17:58:00.971794   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971803   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971891   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.971922   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.971932   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.971966   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.971982   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971986   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972007   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972021   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.972029   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971895   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972054   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972063   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.972070   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971966   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.972208   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.972240   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972248   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972255   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.972262   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971894   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.972315   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972322   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972329   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.972335   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.972405   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.972423   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.972440   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972447   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972523   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972536   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.972546   15980 addons.go:479] Verifying addon registry=true in "addons-676842"
	I1009 17:58:00.972784   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.972799   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.974455   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.974467   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.974597   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.362272357s)
	I1009 17:58:00.974620   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.974630   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.971990   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.78069178s)
	I1009 17:58:00.974684   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.974693   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.974753   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.974782   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.974792   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.974867   15980 addons.go:479] Verifying addon metrics-server=true in "addons-676842"
	I1009 17:58:00.974920   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.974927   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.975032   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.974943   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.975102   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.975117   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.975129   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.974950   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:00.975073   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:00.975387   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:00.975419   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.975428   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.975435   15980 addons.go:479] Verifying addon ingress=true in "addons-676842"
	I1009 17:58:00.976067   15980 out.go:179] * Verifying registry addon...
	I1009 17:58:00.976997   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:00.977012   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:00.977376   15980 out.go:179] * Verifying ingress addon...
	I1009 17:58:00.978025   15980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 17:58:00.978497   15980 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-676842 service yakd-dashboard -n yakd-dashboard
	
	I1009 17:58:00.979293   15980 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 17:58:01.012303   15980 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 17:58:01.012335   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:01.012316   15980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 17:58:01.012352   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:01.103654   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:01.106094   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:01.106116   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:01.106389   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:01.106405   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:01.106444   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	W1009 17:58:01.106518   15980 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
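The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: making local-path the default StorageClass is a metadata update, and another writer modified the object between the read and the write, so the API server rejected the stale resourceVersion. The end state being patched toward looks roughly like this (a hypothetical sketch; the annotation key is the standard default-class marker, the provisioner value is an assumption):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"  # marks the default class
    provisioner: rancher.io/local-path                       # assumed provisioner name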
	I1009 17:58:01.152904   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:01.152930   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:01.153282   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:01.153316   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:01.153332   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:01.354368   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.478558768s)
	I1009 17:58:01.354415   15980 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.379508021s)
	W1009 17:58:01.354421   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 17:58:01.354442   15980 system_svc.go:56] duration metric: took 5.379630324s WaitForService to wait for kubelet
	I1009 17:58:01.354445   15980 retry.go:31] will retry after 218.94962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
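This retry loop is a different, ordering-related failure: kubectl apply submits the CRDs and the custom resources that use them in one batch, so the VolumeSnapshotClass cannot be mapped until the volumesnapshotclasses CRD created moments earlier is registered by the API server. The --force re-apply below succeeds once those CRDs are established. A hypothetical sketch of the resource that failed to map (the object name comes from the error message; the driver value is an assumption):

    apiVersion: snapshot.storage.k8s.io/v1  # version the error says has no mapping yet
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass          # name from the error message above
    driver: hostpath.csi.k8s.io             # assumed CSI driver name
    deletionPolicy: Delete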
	I1009 17:58:01.354454   15980 kubeadm.go:586] duration metric: took 9.362140139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 17:58:01.354485   15980 node_conditions.go:102] verifying NodePressure condition ...
	I1009 17:58:01.376077   15980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 17:58:01.376106   15980 node_conditions.go:123] node cpu capacity is 2
	I1009 17:58:01.376122   15980 node_conditions.go:105] duration metric: took 21.630444ms to run NodePressure ...
	I1009 17:58:01.376135   15980 start.go:241] waiting for startup goroutines ...
	I1009 17:58:01.565218   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:01.565324   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:01.574450   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 17:58:02.035866   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:02.038565   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:02.418625   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.599183118s)
	I1009 17:58:02.418642   15980 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.395410311s)
	I1009 17:58:02.418680   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:02.418696   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:02.418960   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:02.418979   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:02.418983   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:02.418993   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:02.419001   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:02.419230   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:02.419249   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:02.419259   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:02.419272   15980 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-676842"
	I1009 17:58:02.420099   15980 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 17:58:02.420932   15980 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 17:58:02.422298   15980 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 17:58:02.422912   15980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 17:58:02.423602   15980 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 17:58:02.423619   15980 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 17:58:02.455725   15980 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 17:58:02.455745   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:02.490254   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:02.522994   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:02.539449   15980 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 17:58:02.539473   15980 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 17:58:02.814002   15980 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 17:58:02.814034   15980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 17:58:02.875355   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 17:58:02.931212   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:02.988281   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:03.030591   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:03.428098   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:03.485196   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:03.485470   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:03.929173   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:03.986884   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:03.986938   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:04.433167   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:04.487014   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:04.487246   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:04.629837   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.055332099s)
	I1009 17:58:04.629899   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:04.629912   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:04.630149   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.526459926s)
	W1009 17:58:04.630189   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:04.630209   15980 retry.go:31] will retry after 438.500547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:04.630218   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:04.630254   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:04.630272   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:04.630286   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:04.630335   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:04.630521   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:04.630540   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:04.948290   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.072889085s)
	I1009 17:58:04.948344   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:04.948355   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:04.948673   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:04.948692   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:04.948698   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:58:04.948701   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:58:04.948743   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:58:04.949007   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:58:04.949021   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:58:04.950159   15980 addons.go:479] Verifying addon gcp-auth=true in "addons-676842"
	I1009 17:58:04.952712   15980 out.go:179] * Verifying gcp-auth addon...
	I1009 17:58:04.954607   15980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 17:58:04.960135   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:04.971098   15980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 17:58:04.971126   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:05.040543   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:05.040787   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:05.069943   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:05.432234   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:05.459884   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:05.485999   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:05.487915   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:05.931730   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:05.963633   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:06.029925   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:06.030651   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:06.428997   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:06.460032   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:06.483126   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:06.490059   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:06.648858   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.578858148s)
	W1009 17:58:06.648903   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:06.648925   15980 retry.go:31] will retry after 731.325513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:06.928804   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:07.033564   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:07.033759   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:07.034096   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:07.380542   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:07.427805   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:07.462270   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:07.482254   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:07.485179   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:07.928971   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:07.958868   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:07.984849   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:07.985706   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:08.428513   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:08.459583   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:08.490425   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:08.490659   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:08.697562   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.316976526s)
	W1009 17:58:08.697611   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:08.697636   15980 retry.go:31] will retry after 855.562622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:08.928415   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:08.958823   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:08.985281   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:08.987565   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:09.430189   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:09.458975   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:09.486546   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:09.487900   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:09.554164   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:09.927204   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:09.960492   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:09.983616   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:09.983796   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:10.429267   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:10.461680   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:10.482434   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:10.484969   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:10.589896   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.035695033s)
	W1009 17:58:10.589935   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:10.589955   15980 retry.go:31] will retry after 814.316441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:10.927666   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:10.960307   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:10.983741   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:10.986727   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:11.405204   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:11.430692   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:11.462027   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:11.483947   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:11.487669   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:11.927169   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:11.962110   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:11.981286   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:11.986265   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:12.431740   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:12.458709   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:12.485379   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:12.485423   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:12.505069   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.099802692s)
	W1009 17:58:12.505106   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:12.505124   15980 retry.go:31] will retry after 1.722558112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
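
The stderr above is kubectl's client-side schema validation rejecting /etc/kubernetes/addons/ig-crd.yaml: the top-level object in that file sets neither apiVersion nor kind, both of which every applied object must carry, so each retry of the unchanged file fails with the identical message. A minimal pre-flight check for that condition, as a sketch assuming gopkg.in/yaml.v3 and a hypothetical local copy of the manifest (this is not minikube's code):

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // manifestHeader holds the two fields kubectl's validator requires
    // on every top-level object.
    type manifestHeader struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        data, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy of the addon manifest
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // CRD manifests are often multi-document files, so decode each
        // document in turn rather than only the first.
        dec := yaml.NewDecoder(bytes.NewReader(data))
        for i := 0; ; i++ {
            var hdr manifestHeader
            if err := dec.Decode(&hdr); err != nil {
                if errors.Is(err, io.EOF) {
                    return // every document checked
                }
                fmt.Fprintf(os.Stderr, "document %d: not valid YAML: %v\n", i, err)
                os.Exit(1)
            }
            if hdr.APIVersion == "" || hdr.Kind == "" {
                // The condition kubectl reports as
                // "[apiVersion not set, kind not set]".
                fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
                os.Exit(1)
            }
        }
    }

Because the file on disk never changes between attempts, the retries that follow are expected to fail the same way.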
	I1009 17:58:12.931028   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:12.959664   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:12.986784   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:12.988097   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:13.428791   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:13.459646   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:13.483979   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:13.487716   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:13.929194   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:13.959433   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:13.983924   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:13.984541   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:14.228891   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:14.428600   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:14.460807   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:14.483528   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:14.485214   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:14.926491   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:14.959556   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:14.985012   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:14.985147   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:15.341082   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112136433s)
	W1009 17:58:15.341145   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:15.341170   15980 retry.go:31] will retry after 3.804861241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:15.525287   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:15.525359   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:15.527025   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:15.528425   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:15.931477   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:15.960580   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:15.987783   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:15.987955   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:17.072400   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:17.091824   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:17.091936   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:17.092008   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:17.094397   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:17.188200   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:17.188234   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:17.188331   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:17.428052   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:17.458946   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:17.482100   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:17.484372   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:17.928221   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:17.959578   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:17.983853   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:17.988538   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:18.431951   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:18.458997   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:18.490417   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:18.491468   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:18.929774   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:18.957888   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:18.986931   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:18.986956   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:19.146209   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:19.428673   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:19.459296   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:19.482732   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:19.485713   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:19.928608   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:19.960072   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:19.983850   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:19.983977   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:20.279849   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.13358921s)
	W1009 17:58:20.279887   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:20.279909   15980 retry.go:31] will retry after 5.558755367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:20.426760   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:20.457947   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:20.482947   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:20.483476   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:20.927684   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:20.963862   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:20.982937   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:20.984060   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:21.428071   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:21.457888   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:21.483646   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:21.484929   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:21.927081   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:21.958625   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:21.982502   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:21.983446   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:22.428196   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:22.458110   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:22.482568   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:22.483507   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:22.927544   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:22.959238   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:22.982174   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:22.982819   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:23.427782   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:23.461802   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:23.483959   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:23.484475   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:23.927292   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:23.959002   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:23.984241   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:23.985076   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:24.428475   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:24.461161   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:24.484150   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:24.487097   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:24.927228   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:24.960577   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:24.991931   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:24.993565   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:25.430214   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:25.461889   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:25.497635   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:25.501204   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:25.839479   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:25.926510   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:25.959196   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:25.982628   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:25.986094   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:26.428993   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:26.458650   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:26.482168   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:26.487012   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 17:58:26.782663   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:26.782706   15980 retry.go:31] will retry after 5.45462249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:26.926658   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:26.959440   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:26.981222   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:26.984126   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:27.426563   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:27.459745   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:27.483298   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:27.487707   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:27.928365   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:28.029329   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:28.029326   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:28.030156   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:28.433854   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:28.459881   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:28.482693   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:28.483448   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:28.927745   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:28.959002   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:28.982103   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:28.984501   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:29.427792   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:29.458188   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:29.482283   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:29.482372   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:29.927457   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:29.958209   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:29.982397   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:29.984789   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:30.428602   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:30.462484   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:30.483781   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:30.487199   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:30.926630   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:30.959947   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:30.987006   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:30.987827   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:31.429214   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:31.458303   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:31.481254   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:31.482709   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:31.931267   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:31.959905   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:31.983211   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:31.986728   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:32.237717   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:32.429177   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:32.460267   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:32.483394   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:32.485574   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:32.929889   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:32.962015   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:32.984499   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:32.985034   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 17:58:33.204300   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:33.204336   15980 retry.go:31] will retry after 7.710933439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1009 17:58:33.430583   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:33.459342   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:33.484625   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:33.487956   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:33.930202   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:33.958030   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:33.987792   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:33.988019   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:34.428991   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:34.459440   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:34.484003   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:34.484305   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:34.928014   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:34.958097   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:34.985603   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:34.986231   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:35.517279   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:35.518468   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:35.520503   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:35.521572   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:35.928554   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:35.960642   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:35.985217   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:35.985276   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:36.427991   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:36.461051   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:36.490339   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:36.501455   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:37.085198   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:37.089586   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:37.089640   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:37.090211   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:37.428428   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:37.459059   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:37.482994   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:37.483896   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:37.929353   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:37.958943   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:37.983767   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:37.984185   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:38.428389   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:38.459541   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:38.483457   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:38.483942   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:38.926522   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:38.959743   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:38.985584   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:38.986242   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:39.427168   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:39.458014   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:39.483404   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:39.483983   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:39.927495   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:39.958903   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:39.981779   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:39.984437   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:40.429919   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:40.458959   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:40.483470   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:40.489924   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:40.916439   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:40.926361   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:40.961192   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:40.981767   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:40.985520   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:41.428879   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:41.462319   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:41.482754   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:41.489938   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 17:58:41.871497   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:58:41.871548   15980 retry.go:31] will retry after 17.723684175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
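
The retry intervals recorded so far in this run (1.72s, 3.80s, 5.56s, 5.45s, 7.71s, and now 17.72s) grow roughly exponentially with some jitter before each reattempt. A self-contained sketch of that jittered backoff shape, assuming nothing about minikube's retry.go beyond what the log shows:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn until it succeeds or attempts run out,
    // sleeping an exponentially growing, jittered interval between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i == attempts-1 {
                break // no point sleeping after the final failure
            }
            // Double the delay each attempt and add up to 50% random
            // jitter, which yields a sequence shaped like the intervals
            // logged above.
            backoff := base << uint(i)
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)+1))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(6, 2*time.Second, func() error {
            calls++
            if calls < 4 {
                return errors.New("Process exited with status 1")
            }
            return nil
        })
        fmt.Println("final:", err)
    }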
	I1009 17:58:41.927447   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:41.958239   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:41.981790   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:41.982411   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:42.427703   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:42.459150   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:42.482214   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:42.482802   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:42.938248   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:43.035463   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:43.035572   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:43.035675   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:43.427482   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:43.459254   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:43.482707   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:43.484536   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:43.927495   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:43.959165   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:43.983238   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:43.983327   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:44.430208   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:44.459432   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:44.485893   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:44.487096   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:44.928546   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:44.959346   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:44.985784   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:44.987356   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:45.427367   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:45.458541   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:45.484645   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:45.486409   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:45.985187   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:45.987248   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:45.987314   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:45.987534   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:46.427546   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:46.460348   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:46.484156   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 17:58:46.484231   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:46.930255   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:46.961655   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:46.981546   15980 kapi.go:107] duration metric: took 46.003517343s to wait for kubernetes.io/minikube-addons=registry ...
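
The duration metric above closes one of the four wait loops: each kapi.go line in this stretch is a roughly 500ms poll of the pods matching a label selector, repeated until they leave Pending. A minimal client-go sketch of that polling pattern; the selector string comes from the log, while the kube-system namespace, the timeout, and the helper name are assumptions made for illustration:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls the pods matching selector until every match
    // is Running, printing one line per poll like the log above.
    func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        start := time.Now()
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if ready {
                fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }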
	I1009 17:58:46.988777   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:47.429527   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:47.460733   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:47.483437   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:47.929188   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:47.960219   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:47.987718   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:48.427366   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:48.459212   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:48.486734   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:48.928050   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:48.958853   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:48.985107   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:49.427198   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:49.458344   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:49.483201   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:49.927145   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:49.958534   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:49.984325   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:50.430239   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:50.460940   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:50.485473   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:50.994274   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:50.994387   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:50.996706   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:51.427996   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:51.461260   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:51.528324   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:51.927351   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:51.958174   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:51.985633   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:58:52.437167   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... kapi.go:96 poll loop repeated at ~0.5 s cadence: pods "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=gcp-auth", and "app.kubernetes.io/name=ingress-nginx" all remain Pending: [<nil>], I1009 17:58:52.457881 through 17:58:59.485143 ...]
	I1009 17:58:59.596172   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:58:59.926489   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:58:59.959866   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:58:59.987312   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:00.430327   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:00.460914   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:00.484655   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:00.851471   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.255255647s)
	W1009 17:59:00.851512   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:59:00.851542   15980 retry.go:31] will retry after 15.151557872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
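The validation failure above means at least one YAML document inside ig-crd.yaml reaches kubectl without the two mandatory type fields: every document in a Kubernetes manifest must declare apiVersion and kind (for a CRD, apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition), and a stray document separator or a headerless document yields exactly this error. A minimal way to reproduce the validation outside the addon retry loop, assuming shell access to the node (paths and kubectl binary taken from the log above), is a client-side dry run, which validates without touching the cluster:

	# Hypothetical manual check (not part of the test run): repeat the same
	# apply as a client-side dry run so only validation executes.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml

The companion manifest, ig-deployment.yaml, evidently parses fine (all of its objects report unchanged or configured above), so the dry run isolates the CRD file.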
	[... kapi.go:96 poll loop repeated at ~0.5 s cadence: pods "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=gcp-auth", and "app.kubernetes.io/name=ingress-nginx" all remain Pending: [<nil>], I1009 17:59:00.928921 through 17:59:15.984142 ...]
	I1009 17:59:16.004240   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:59:16.428147   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:16.460887   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:16.484290   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:16.939651   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:16.961112   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:16.987636   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:17.143967   15980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.139676793s)
	W1009 17:59:17.144018   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:59:17.144065   15980 retry.go:31] will retry after 23.978804566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
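The error names the failing file but not the failing document. Since in a well-formed manifest the first substantive line of every YAML document is an apiVersion line, printing those first lines pinpoints the culprit. A sketch, assuming plain awk is available on the node:

	# Hypothetical: print the first non-blank, non-comment line of each YAML
	# document in the CRD manifest; a document whose first line is not
	# "apiVersion:" (or that prints nothing at all) is the one failing.
	awk 'BEGIN { d = 1 }
	     /^---/ { d = 1; next }
	     d && NF && $1 !~ /^#/ { print NR ": " $0; d = 0 }' \
	  /etc/kubernetes/addons/ig-crd.yaml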
	I1009 17:59:17.427573   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:17.458827   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:17.484181   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:17.927723   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:17.958834   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:17.986645   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:18.429955   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:18.458647   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:18.483945   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:18.928309   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:18.959270   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:18.984065   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:19.427292   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 17:59:19.458318   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:19.482751   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 17:59:19.927983   15980 kapi.go:107] duration metric: took 1m17.50506573s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... kapi.go:96 poll loop repeated at ~0.5 s cadence (csi-hostpath-driver done as of 17:59:19): pods "kubernetes.io/minikube-addons=gcp-auth" and "app.kubernetes.io/name=ingress-nginx" remain Pending: [<nil>], I1009 17:59:19.958098 through 17:59:40.985535 ...]
	I1009 17:59:41.123718   15980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 17:59:41.460485   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 17:59:41.486225   15980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 17:59:41.908206   15980 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 17:59:41.908283   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:59:41.908295   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:59:41.908704   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:59:41.908730   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 17:59:41.908741   15980 main.go:141] libmachine: Making call to close driver server
	I1009 17:59:41.908749   15980 main.go:141] libmachine: (addons-676842) Calling .Close
	I1009 17:59:41.909317   15980 main.go:141] libmachine: (addons-676842) DBG | Closing plugin on server side
	I1009 17:59:41.909512   15980 main.go:141] libmachine: Successfully made call to close driver server
	I1009 17:59:41.909611   15980 main.go:141] libmachine: Making call to close connection to plugin binary
	W1009 17:59:41.909723   15980 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
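After this third failure the addon stops retrying and surfaces the warning above; inspektor-gadget is left disabled while the other addons continue. The stderr itself names the only in-place workaround, disabling client-side validation. A sketch of that manual retry, combining the exact command from the log with the flag the error message suggests (a workaround, not a fix: whether it succeeds depends on what the malformed document actually contains, and repairing ig-crd.yaml is the real remedy):

	# Hypothetical workaround taken from the error text: skip client-side
	# validation so the well-formed documents still apply. The malformed
	# document is not repaired by this.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml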
	[... kapi.go:96 poll loop repeated at ~0.5 s cadence: pods "kubernetes.io/minikube-addons=gcp-auth" and "app.kubernetes.io/name=ingress-nginx" remain Pending: [<nil>], I1009 17:59:41.958516 through 18:00:24.458822 ...]
	I1009 18:00:24.484855   15980 kapi.go:107] duration metric: took 2m23.505559119s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:00:24.958348   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:25.460628   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:25.958286   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:26.461096   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:26.958150   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:27.463205   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:27.958482   15980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:00:28.459878   15980 kapi.go:107] duration metric: took 2m23.505275627s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:00:28.461376   15980 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-676842 cluster.
	I1009 18:00:28.462628   15980 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:00:28.463771   15980 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:00:28.465268   15980 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:00:28.466323   15980 addons.go:514] duration metric: took 2m36.473949893s for enable addons: enabled=[nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
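	The gcp-auth messages above amount to a label-based opt-out: the addon mounts credentials into every new pod unless the pod's metadata carries the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec using client-go types (the pod name, container, and label value are illustrative; per the message above, only the label key matters):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildPodSkippingGCPAuth returns a pod spec that the gcp-auth addon
// should leave alone: any pod carrying the gcp-auth-skip-secret label
// key is skipped when credentials are mounted.
func buildPodSkippingGCPAuth() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds-pod", // hypothetical name
			Namespace: "default",
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true", // value is arbitrary; the key is what matters
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}
}

func main() {
	fmt.Println(buildPodSkippingGCPAuth().Labels)
}
```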
	I1009 18:00:28.466365   15980 start.go:246] waiting for cluster config update ...
	I1009 18:00:28.466384   15980 start.go:255] writing updated cluster config ...
	I1009 18:00:28.466656   15980 ssh_runner.go:195] Run: rm -f paused
	I1009 18:00:28.473648   15980 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:00:28.479465   15980 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vclxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.486606   15980 pod_ready.go:94] pod "coredns-66bc5c9577-vclxq" is "Ready"
	I1009 18:00:28.486640   15980 pod_ready.go:86] duration metric: took 7.139178ms for pod "coredns-66bc5c9577-vclxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.489116   15980 pod_ready.go:83] waiting for pod "etcd-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.497406   15980 pod_ready.go:94] pod "etcd-addons-676842" is "Ready"
	I1009 18:00:28.497438   15980 pod_ready.go:86] duration metric: took 8.296655ms for pod "etcd-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.500421   15980 pod_ready.go:83] waiting for pod "kube-apiserver-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.506118   15980 pod_ready.go:94] pod "kube-apiserver-addons-676842" is "Ready"
	I1009 18:00:28.506147   15980 pod_ready.go:86] duration metric: took 5.704014ms for pod "kube-apiserver-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.508285   15980 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:28.878023   15980 pod_ready.go:94] pod "kube-controller-manager-addons-676842" is "Ready"
	I1009 18:00:28.878074   15980 pod_ready.go:86] duration metric: took 369.76729ms for pod "kube-controller-manager-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:29.078660   15980 pod_ready.go:83] waiting for pod "kube-proxy-6dblk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:29.477670   15980 pod_ready.go:94] pod "kube-proxy-6dblk" is "Ready"
	I1009 18:00:29.477698   15980 pod_ready.go:86] duration metric: took 399.001556ms for pod "kube-proxy-6dblk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:29.678412   15980 pod_ready.go:83] waiting for pod "kube-scheduler-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:30.077652   15980 pod_ready.go:94] pod "kube-scheduler-addons-676842" is "Ready"
	I1009 18:00:30.077680   15980 pod_ready.go:86] duration metric: took 399.242713ms for pod "kube-scheduler-addons-676842" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:00:30.077691   15980 pod_ready.go:40] duration metric: took 1.604008671s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:00:30.125065   15980 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1009 18:00:30.126886   15980 out.go:179] * Done! kubectl is now configured to use "addons-676842" cluster and "default" namespace by default
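	The pod_ready entries above record minikube polling kube-system pods by label selector until each reports the Ready condition (or the 4m0s budget expires). A rough client-go sketch of that style of check — not minikube's own code; the kubeconfig path, selector, and poll interval are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the selector
// has the PodReady condition set to True, mirroring the waits logged above.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for { // poll, roughly like the ticker behind the pod_ready lines
		ok, err := podsReady(ctx, cs, "k8s-app=kube-dns")
		if err == nil && ok {
			fmt.Println("all matching pods Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```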
	
	
	==> CRI-O <==
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.654272725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=372703c5-74df-4814-b153-d0a309d64630 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.654632460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995fc86d2affbf0680af61dba0d98eb82cc0a0fed5678793dbcb600a7b34bfcb,PodSandboxId:a8dda28e27cc577f5770649038c7ec8e405050408488b14a35174cd5334658d7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760032888261621910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42803e48-12dc-491c-8f14-f4a8f6b9b681,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c6607529129f37a8871683364f271a71a1636fc3a028b43243a0e822262b8a,PodSandboxId:60832a0b54c010bdb6bccad46685fafba74261a1227f1436bda571bb3bd16a0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760032834399632390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84f451ab-c2a7-43a2-9d98-5ba2301830da,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18ba07b34257fee353d29aa70725964d6d47b82dd519c8407a0df15cbcee993,PodSandboxId:6e7c98a83811f6f43594d11ebc1b2ac3e01458fb6a59432fefe424c8b8bbf21e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760032823829690554,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-7xdlv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296e9cc6-5685-4fb7-98fa-5926a2611b08,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:18cee74f81c7e75419ba00b13dad6d1d177f0db32db1c72d2c16a947c67930f1,PodSandboxId:16dbfe202341945499fcc6cca0d2bff9208ae07d28a6e5923c0ace88e4c97c22,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745724208958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mrdwp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12b6899b-7e6c-46e6-8e9e-c2423a7d4682,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9fbd8bbfa4f4daf7366228fd9542c7c837db18e2ae51727fb9ff748a6fd1d5,PodSandboxId:97f177cbdc5e2d77665315409926046d5b4a52b43778ef6a52b83704443f11ea,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745594232375,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mv75,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6e96b54-f195-4489-b016-e8e824ed37a0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a072658f06a281385e5199fa15f946ed6623b02adc1c416d039923f07d342c3,PodSandboxId:387e9b7d4425294d263f5e29989c163254b9144e63a5ea0c90d9a8112ae3363d,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760032743741121985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kctxd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14d40841-af19-4da7-b7d3-87b6d61ac2a0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31e2616651ef179966a127652045b00ec2737dcecc1d7753f2ffe82d098eca1,PodSandboxId:35fc705b97239aac5f353dd7e21b434401d134cfab010cf67ef3a01de3cdaaba,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-pa
th-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760032731392097915,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-bt88z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e795d75d-7632-4856-b93b-4e8a5c97a7b5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f088d17f46e2647de581b8135d13dba272f6fb26930135e5c4755f2f6b27040,PodSandboxId:9fdf025c2ec9c12b39a2bad1efb0418f176cc254019c1246e8ec0fa0e8c2401b,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760032718050474252,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efbd24b-ce67-4159-9a62-b2ceb6fc00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2734107176a8a5abfa14be2ec96f85927b0fa294a6e23b891ad2598a
6867bfb5,PodSandboxId:ac5533d74bd089490def19a2376873eb602cda12f694b83dc556cdde08c5879c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760032683171063363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1711e0e-9dae-4164-8418-a4ae434da45d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcec9f2ebfb3f3514a0cfcd30a6659d68c06d1383a1178dfd82073f23f872c8c,Pod
SandboxId:d59b0f39103091f5bde1b31d1a6f60fc73fe9e56728610604f0df0c53adc7f9e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760032682583060272,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ns4vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c5cfb9-6426-4b63-aeb5-cd922b35ee18,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17834a7f5dcc190bfdac7de3e9c8
8c88d53299675d398ec2c0e430ff7caacbf7,PodSandboxId:7821db75851f04e47aff5a0d287e265c6bc48cba5c2e3c056aacfa5512eaa5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760032673351769514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vclxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59422eba-75ea-4f86-b424-4b4344fbe0c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol
\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567c6c81405f261e8da7bae8db6ad88bfc44e0e874e0c791ea46255c3ad5938,PodSandboxId:5f1017ae3d3e03fecc51081950438e705b88aaa64bb159ae699282fc48873263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760032672612106419,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dblk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5617bb0d-cf1c-451e-96a6-5bdf02363249,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1163e800698f013bb5436486b14af72bcc8b9690faa4c944255976c5c254616,PodSandboxId:6cfd0b3c57b3362194a5df094ee930d5d6d93340aa3aec68054e4a63e687622f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760032660655095903,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca91226266e78e6215be7547b801606,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041cff018946d728797470a008acf54aee6ef1bca690aa6e516b40c511330f9e,PodSandboxId:b5b2bb4d6cc349d288d8d6a647f3acd0957ae7729056a3f25aeaaa3bd4c88a36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760032660623402458,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-676842,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 3df9c906039c22f4efe4c1f5e9233106,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee6b76d942c184e7d955094f55457dbff7b90e77e07dca7aa6ca9e54b0f4ea2,PodSandboxId:9270dc0a07b555c59ca6536795efec7a3fa7677cd02ba274ce87d920c6fe800f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760032660611559598,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ece99aed73119053b95c1de263e4e05d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43371a3e132f2d3bfb885daf03f6f75a5316f08aa28ae3e7b565219bae41fbcb,PodSandboxId:55c1030f2a2096692b859e6649896014ea57d3ccc81303bd021583853a82eed7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cb
adc97,State:CONTAINER_RUNNING,CreatedAt:1760032660604922053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdee461c1dd4d6468f5e5a4d8ea6a0c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=372703c5-74df-4814-b153-d0a309d64630 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.695900888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d225dbe-0eef-4586-84fe-21bce3ed0088 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.695998129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d225dbe-0eef-4586-84fe-21bce3ed0088 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.697637973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3a52504-fb74-418c-8b78-63d782f7a757 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.699564655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760033031699534632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3a52504-fb74-418c-8b78-63d782f7a757 name=/runtime.v1.ImageService/ImageFsInfo
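	The crio[825] debug entries here are CRI-O's server-side view of CRI gRPC calls: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers. A client-side sketch issuing the same three RPCs, assuming the k8s.io/cri-api bindings and the default CRI-O socket path (adjust for your host):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; inside the minikube VM this is what the
	// kubelet itself talks to.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version, as in the VersionRequest entries above.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. cri-o 1.29.1

	// ImageService/ImageFsInfo: image store mountpoint and usage.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// RuntimeService/ListContainers with an empty filter returns the full
	// container list, matching the "No filters were applied" lines above.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range cs.Containers {
		fmt.Println(c.Id[:12], c.Metadata.Name, c.State)
	}
}
```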
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.700185994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8773da67-51f9-4058-bab8-65ff5507d169 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.700245399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8773da67-51f9-4058-bab8-65ff5507d169 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.700644074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995fc86d2affbf0680af61dba0d98eb82cc0a0fed5678793dbcb600a7b34bfcb,PodSandboxId:a8dda28e27cc577f5770649038c7ec8e405050408488b14a35174cd5334658d7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760032888261621910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42803e48-12dc-491c-8f14-f4a8f6b9b681,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c6607529129f37a8871683364f271a71a1636fc3a028b43243a0e822262b8a,PodSandboxId:60832a0b54c010bdb6bccad46685fafba74261a1227f1436bda571bb3bd16a0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760032834399632390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84f451ab-c2a7-43a2-9d98-5ba2301830da,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18ba07b34257fee353d29aa70725964d6d47b82dd519c8407a0df15cbcee993,PodSandboxId:6e7c98a83811f6f43594d11ebc1b2ac3e01458fb6a59432fefe424c8b8bbf21e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760032823829690554,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-7xdlv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296e9cc6-5685-4fb7-98fa-5926a2611b08,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:18cee74f81c7e75419ba00b13dad6d1d177f0db32db1c72d2c16a947c67930f1,PodSandboxId:16dbfe202341945499fcc6cca0d2bff9208ae07d28a6e5923c0ace88e4c97c22,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745724208958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mrdwp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12b6899b-7e6c-46e6-8e9e-c2423a7d4682,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9fbd8bbfa4f4daf7366228fd9542c7c837db18e2ae51727fb9ff748a6fd1d5,PodSandboxId:97f177cbdc5e2d77665315409926046d5b4a52b43778ef6a52b83704443f11ea,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745594232375,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mv75,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6e96b54-f195-4489-b016-e8e824ed37a0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a072658f06a281385e5199fa15f946ed6623b02adc1c416d039923f07d342c3,PodSandboxId:387e9b7d4425294d263f5e29989c163254b9144e63a5ea0c90d9a8112ae3363d,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760032743741121985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kctxd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14d40841-af19-4da7-b7d3-87b6d61ac2a0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31e2616651ef179966a127652045b00ec2737dcecc1d7753f2ffe82d098eca1,PodSandboxId:35fc705b97239aac5f353dd7e21b434401d134cfab010cf67ef3a01de3cdaaba,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-pa
th-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760032731392097915,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-bt88z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e795d75d-7632-4856-b93b-4e8a5c97a7b5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f088d17f46e2647de581b8135d13dba272f6fb26930135e5c4755f2f6b27040,PodSandboxId:9fdf025c2ec9c12b39a2bad1efb0418f176cc254019c1246e8ec0fa0e8c2401b,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760032718050474252,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efbd24b-ce67-4159-9a62-b2ceb6fc00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2734107176a8a5abfa14be2ec96f85927b0fa294a6e23b891ad2598a
6867bfb5,PodSandboxId:ac5533d74bd089490def19a2376873eb602cda12f694b83dc556cdde08c5879c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760032683171063363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1711e0e-9dae-4164-8418-a4ae434da45d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcec9f2ebfb3f3514a0cfcd30a6659d68c06d1383a1178dfd82073f23f872c8c,Pod
SandboxId:d59b0f39103091f5bde1b31d1a6f60fc73fe9e56728610604f0df0c53adc7f9e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760032682583060272,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ns4vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c5cfb9-6426-4b63-aeb5-cd922b35ee18,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17834a7f5dcc190bfdac7de3e9c8
8c88d53299675d398ec2c0e430ff7caacbf7,PodSandboxId:7821db75851f04e47aff5a0d287e265c6bc48cba5c2e3c056aacfa5512eaa5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760032673351769514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vclxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59422eba-75ea-4f86-b424-4b4344fbe0c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol
\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567c6c81405f261e8da7bae8db6ad88bfc44e0e874e0c791ea46255c3ad5938,PodSandboxId:5f1017ae3d3e03fecc51081950438e705b88aaa64bb159ae699282fc48873263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760032672612106419,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dblk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5617bb0d-cf1c-451e-96a6-5bdf02363249,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1163e800698f013bb5436486b14af72bcc8b9690faa4c944255976c5c254616,PodSandboxId:6cfd0b3c57b3362194a5df094ee930d5d6d93340aa3aec68054e4a63e687622f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760032660655095903,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca91226266e78e6215be7547b801606,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041cff018946d728797470a008acf54aee6ef1bca690aa6e516b40c511330f9e,PodSandboxId:b5b2bb4d6cc349d288d8d6a647f3acd0957ae7729056a3f25aeaaa3bd4c88a36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760032660623402458,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-676842,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 3df9c906039c22f4efe4c1f5e9233106,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee6b76d942c184e7d955094f55457dbff7b90e77e07dca7aa6ca9e54b0f4ea2,PodSandboxId:9270dc0a07b555c59ca6536795efec7a3fa7677cd02ba274ce87d920c6fe800f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760032660611559598,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ece99aed73119053b95c1de263e4e05d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43371a3e132f2d3bfb885daf03f6f75a5316f08aa28ae3e7b565219bae41fbcb,PodSandboxId:55c1030f2a2096692b859e6649896014ea57d3ccc81303bd021583853a82eed7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cb
adc97,State:CONTAINER_RUNNING,CreatedAt:1760032660604922053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdee461c1dd4d6468f5e5a4d8ea6a0c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8773da67-51f9-4058-bab8-65ff5507d169 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.741499139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3117656-b039-4f6e-94b1-6f9bd441bb10 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.741702561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3117656-b039-4f6e-94b1-6f9bd441bb10 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.743427030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d732c9c6-6911-49fe-b51a-5abbf6a3b2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.744685533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760033031744651590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d732c9c6-6911-49fe-b51a-5abbf6a3b2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.745397195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5b4586d-0596-483a-8965-6b8a8f3a8171 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.745463082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5b4586d-0596-483a-8965-6b8a8f3a8171 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.745882394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995fc86d2affbf0680af61dba0d98eb82cc0a0fed5678793dbcb600a7b34bfcb,PodSandboxId:a8dda28e27cc577f5770649038c7ec8e405050408488b14a35174cd5334658d7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760032888261621910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42803e48-12dc-491c-8f14-f4a8f6b9b681,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c6607529129f37a8871683364f271a71a1636fc3a028b43243a0e822262b8a,PodSandboxId:60832a0b54c010bdb6bccad46685fafba74261a1227f1436bda571bb3bd16a0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760032834399632390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84f451ab-c2a7-43a2-9d98-5ba2301830da,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18ba07b34257fee353d29aa70725964d6d47b82dd519c8407a0df15cbcee993,PodSandboxId:6e7c98a83811f6f43594d11ebc1b2ac3e01458fb6a59432fefe424c8b8bbf21e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760032823829690554,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-7xdlv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296e9cc6-5685-4fb7-98fa-5926a2611b08,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:18cee74f81c7e75419ba00b13dad6d1d177f0db32db1c72d2c16a947c67930f1,PodSandboxId:16dbfe202341945499fcc6cca0d2bff9208ae07d28a6e5923c0ace88e4c97c22,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745724208958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mrdwp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12b6899b-7e6c-46e6-8e9e-c2423a7d4682,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9fbd8bbfa4f4daf7366228fd9542c7c837db18e2ae51727fb9ff748a6fd1d5,PodSandboxId:97f177cbdc5e2d77665315409926046d5b4a52b43778ef6a52b83704443f11ea,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745594232375,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mv75,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6e96b54-f195-4489-b016-e8e824ed37a0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a072658f06a281385e5199fa15f946ed6623b02adc1c416d039923f07d342c3,PodSandboxId:387e9b7d4425294d263f5e29989c163254b9144e63a5ea0c90d9a8112ae3363d,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760032743741121985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kctxd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14d40841-af19-4da7-b7d3-87b6d61ac2a0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31e2616651ef179966a127652045b00ec2737dcecc1d7753f2ffe82d098eca1,PodSandboxId:35fc705b97239aac5f353dd7e21b434401d134cfab010cf67ef3a01de3cdaaba,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760032731392097915,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-bt88z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e795d75d-7632-4856-b93b-4e8a5c97a7b5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f088d17f46e2647de581b8135d13dba272f6fb26930135e5c4755f2f6b27040,PodSandboxId:9fdf025c2ec9c12b39a2bad1efb0418f176cc254019c1246e8ec0fa0e8c2401b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760032718050474252,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efbd24b-ce67-4159-9a62-b2ceb6fc00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2734107176a8a5abfa14be2ec96f85927b0fa294a6e23b891ad2598a6867bfb5,PodSandboxId:ac5533d74bd089490def19a2376873eb602cda12f694b83dc556cdde08c5879c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760032683171063363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1711e0e-9dae-4164-8418-a4ae434da45d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcec9f2ebfb3f3514a0cfcd30a6659d68c06d1383a1178dfd82073f23f872c8c,PodSandboxId:d59b0f39103091f5bde1b31d1a6f60fc73fe9e56728610604f0df0c53adc7f9e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760032682583060272,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ns4vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c5cfb9-6426-4b63-aeb5-cd922b35ee18,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17834a7f5dcc190bfdac7de3e9c88c88d53299675d398ec2c0e430ff7caacbf7,PodSandboxId:7821db75851f04e47aff5a0d287e265c6bc48cba5c2e3c056aacfa5512eaa5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760032673351769514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vclxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59422eba-75ea-4f86-b424-4b4344fbe0c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567c6c81405f261e8da7bae8db6ad88bfc44e0e874e0c791ea46255c3ad5938,PodSandboxId:5f1017ae3d3e03fecc51081950438e705b88aaa64bb159ae699282fc48873263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760032672612106419,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dblk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5617bb0d-cf1c-451e-96a6-5bdf02363249,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1163e800698f013bb5436486b14af72bcc8b9690faa4c944255976c5c254616,PodSandboxId:6cfd0b3c57b3362194a5df094ee930d5d6d93340aa3aec68054e4a63e687622f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760032660655095903,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca91226266e78e6215be7547b801606,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041cff018946d728797470a008acf54aee6ef1bca690aa6e516b40c511330f9e,PodSandboxId:b5b2bb4d6cc349d288d8d6a647f3acd0957ae7729056a3f25aeaaa3bd4c88a36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760032660623402458,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3df9c906039c22f4efe4c1f5e9233106,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee6b76d942c184e7d955094f55457dbff7b90e77e07dca7aa6ca9e54b0f4ea2,PodSandboxId:9270dc0a07b555c59ca6536795efec7a3fa7677cd02ba274ce87d920c6fe800f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760032660611559598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ece99aed73119053b95c1de263e4e05d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43371a3e132f2d3bfb885daf03f6f75a5316f08aa28ae3e7b565219bae41fbcb,PodSandboxId:55c1030f2a2096692b859e6649896014ea57d3ccc81303bd021583853a82eed7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760032660604922053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdee461c1dd4d6468f5e5a4d8ea6a0c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5b4586d-0596-483a-8965-6b8a8f3a8171 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.765290744Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.765630854Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.794728070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6ad56fc-48b4-4a7b-9113-5d6b1a4a7beb name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.794864233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6ad56fc-48b4-4a7b-9113-5d6b1a4a7beb name=/runtime.v1.RuntimeService/Version
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.796481131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5de6189-d369-490c-bb36-3f0aa60300ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.797857035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760033031797778866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5de6189-d369-490c-bb36-3f0aa60300ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.798446015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=713655a0-0696-496c-98dc-a80c5ba627c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.798505742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=713655a0-0696-496c-98dc-a80c5ba627c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:03:51 addons-676842 crio[825]: time="2025-10-09 18:03:51.799613457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995fc86d2affbf0680af61dba0d98eb82cc0a0fed5678793dbcb600a7b34bfcb,PodSandboxId:a8dda28e27cc577f5770649038c7ec8e405050408488b14a35174cd5334658d7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760032888261621910,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42803e48-12dc-491c-8f14-f4a8f6b9b681,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c6607529129f37a8871683364f271a71a1636fc3a028b43243a0e822262b8a,PodSandboxId:60832a0b54c010bdb6bccad46685fafba74261a1227f1436bda571bb3bd16a0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760032834399632390,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84f451ab-c2a7-43a2-9d98-5ba2301830da,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18ba07b34257fee353d29aa70725964d6d47b82dd519c8407a0df15cbcee993,PodSandboxId:6e7c98a83811f6f43594d11ebc1b2ac3e01458fb6a59432fefe424c8b8bbf21e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760032823829690554,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-7xdlv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 296e9cc6-5685-4fb7-98fa-5926a2611b08,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:18cee74f81c7e75419ba00b13dad6d1d177f0db32db1c72d2c16a947c67930f1,PodSandboxId:16dbfe202341945499fcc6cca0d2bff9208ae07d28a6e5923c0ace88e4c97c22,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745724208958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mrdwp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12b6899b-7e6c-46e6-8e9e-c2423a7d4682,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9fbd8bbfa4f4daf7366228fd9542c7c837db18e2ae51727fb9ff748a6fd1d5,PodSandboxId:97f177cbdc5e2d77665315409926046d5b4a52b43778ef6a52b83704443f11ea,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760032745594232375,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9mv75,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6e96b54-f195-4489-b016-e8e824ed37a0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a072658f06a281385e5199fa15f946ed6623b02adc1c416d039923f07d342c3,PodSandboxId:387e9b7d4425294d263f5e29989c163254b9144e63a5ea0c90d9a8112ae3363d,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760032743741121985,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-kctxd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 14d40841-af19-4da7-b7d3-87b6d61ac2a0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31e2616651ef179966a127652045b00ec2737dcecc1d7753f2ffe82d098eca1,PodSandboxId:35fc705b97239aac5f353dd7e21b434401d134cfab010cf67ef3a01de3cdaaba,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-pa
th-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760032731392097915,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-bt88z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e795d75d-7632-4856-b93b-4e8a5c97a7b5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f088d17f46e2647de581b8135d13dba272f6fb26930135e5c4755f2f6b27040,PodSandboxId:9fdf025c2ec9c12b39a2bad1efb0418f176cc254019c1246e8ec0fa0e8c2401b,Metadata:&ContainerMetadata{Name:minikube-ingres
s-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760032718050474252,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efbd24b-ce67-4159-9a62-b2ceb6fc00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2734107176a8a5abfa14be2ec96f85927b0fa294a6e23b891ad2598a
6867bfb5,PodSandboxId:ac5533d74bd089490def19a2376873eb602cda12f694b83dc556cdde08c5879c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760032683171063363,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1711e0e-9dae-4164-8418-a4ae434da45d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcec9f2ebfb3f3514a0cfcd30a6659d68c06d1383a1178dfd82073f23f872c8c,Pod
SandboxId:d59b0f39103091f5bde1b31d1a6f60fc73fe9e56728610604f0df0c53adc7f9e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760032682583060272,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ns4vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c5cfb9-6426-4b63-aeb5-cd922b35ee18,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17834a7f5dcc190bfdac7de3e9c8
8c88d53299675d398ec2c0e430ff7caacbf7,PodSandboxId:7821db75851f04e47aff5a0d287e265c6bc48cba5c2e3c056aacfa5512eaa5b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760032673351769514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vclxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59422eba-75ea-4f86-b424-4b4344fbe0c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol
\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6567c6c81405f261e8da7bae8db6ad88bfc44e0e874e0c791ea46255c3ad5938,PodSandboxId:5f1017ae3d3e03fecc51081950438e705b88aaa64bb159ae699282fc48873263,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760032672612106419,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6dblk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5617bb0d-cf1c-451e-96a6-5bdf02363249,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1163e800698f013bb5436486b14af72bcc8b9690faa4c944255976c5c254616,PodSandboxId:6cfd0b3c57b3362194a5df094ee930d5d6d93340aa3aec68054e4a63e687622f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760032660655095903,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca91226266e78e6215be7547b801606,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041cff018946d728797470a008acf54aee6ef1bca690aa6e516b40c511330f9e,PodSandboxId:b5b2bb4d6cc349d288d8d6a647f3acd0957ae7729056a3f25aeaaa3bd4c88a36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760032660623402458,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-676842,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 3df9c906039c22f4efe4c1f5e9233106,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee6b76d942c184e7d955094f55457dbff7b90e77e07dca7aa6ca9e54b0f4ea2,PodSandboxId:9270dc0a07b555c59ca6536795efec7a3fa7677cd02ba274ce87d920c6fe800f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760032660611559598,Labels:map[string]string{io.kubernetes.contain
er.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ece99aed73119053b95c1de263e4e05d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43371a3e132f2d3bfb885daf03f6f75a5316f08aa28ae3e7b565219bae41fbcb,PodSandboxId:55c1030f2a2096692b859e6649896014ea57d3ccc81303bd021583853a82eed7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cb
adc97,State:CONTAINER_RUNNING,CreatedAt:1760032660604922053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-676842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdee461c1dd4d6468f5e5a4d8ea6a0c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=713655a0-0696-496c-98dc-a80c5ba627c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	995fc86d2affb       docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e                              2 minutes ago       Running             nginx                     0                   a8dda28e27cc5       nginx
	61c6607529129       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   60832a0b54c01       busybox
	a18ba07b34257       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   6e7c98a83811f       ingress-nginx-controller-9cc49f96f-7xdlv
	18cee74f81c7e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              patch                     0                   16dbfe2023419       ingress-nginx-admission-patch-mrdwp
	4a9fbd8bbfa4f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   97f177cbdc5e2       ingress-nginx-admission-create-9mv75
	1a072658f06a2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   387e9b7d44252       gadget-kctxd
	f31e2616651ef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             5 minutes ago       Running             local-path-provisioner    0                   35fc705b97239       local-path-provisioner-648f6765c9-bt88z
	7f088d17f46e2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   9fdf025c2ec9c       kube-ingress-dns-minikube
	2734107176a8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   ac5533d74bd08       storage-provisioner
	bcec9f2ebfb3f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   d59b0f3910309       amd-gpu-device-plugin-ns4vt
	17834a7f5dcc1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   7821db75851f0       coredns-66bc5c9577-vclxq
	6567c6c81405f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   5f1017ae3d3e0       kube-proxy-6dblk
	c1163e800698f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   6cfd0b3c57b33       etcd-addons-676842
	041cff018946d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             6 minutes ago       Running             kube-controller-manager   0                   b5b2bb4d6cc34       kube-controller-manager-addons-676842
	fee6b76d942c1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             6 minutes ago       Running             kube-scheduler            0                   9270dc0a07b55       kube-scheduler-addons-676842
	43371a3e132f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             6 minutes ago       Running             kube-apiserver            0                   55c1030f2a209       kube-apiserver-addons-676842
	
	
	==> coredns [17834a7f5dcc190bfdac7de3e9c88c88d53299675d398ec2c0e430ff7caacbf7] <==
	[INFO] 10.244.0.8:44235 - 56855 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000238501s
	[INFO] 10.244.0.8:44235 - 20413 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000138723s
	[INFO] 10.244.0.8:44235 - 12723 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000181837s
	[INFO] 10.244.0.8:44235 - 39902 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000089243s
	[INFO] 10.244.0.8:44235 - 63794 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000173394s
	[INFO] 10.244.0.8:44235 - 22671 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000309731s
	[INFO] 10.244.0.8:44235 - 47894 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000185698s
	[INFO] 10.244.0.8:44826 - 57003 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160575s
	[INFO] 10.244.0.8:44826 - 56677 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000281401s
	[INFO] 10.244.0.8:52957 - 23615 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160076s
	[INFO] 10.244.0.8:52957 - 23843 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000190581s
	[INFO] 10.244.0.8:39821 - 23965 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165678s
	[INFO] 10.244.0.8:39821 - 24217 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000157887s
	[INFO] 10.244.0.8:41882 - 35276 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000266391s
	[INFO] 10.244.0.8:41882 - 35534 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172598s
	[INFO] 10.244.0.23:56050 - 60218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00063237s
	[INFO] 10.244.0.23:42859 - 34274 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000815502s
	[INFO] 10.244.0.23:48471 - 43371 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015658s
	[INFO] 10.244.0.23:36910 - 48748 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137606s
	[INFO] 10.244.0.23:52792 - 14920 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096543s
	[INFO] 10.244.0.23:36696 - 60739 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164974s
	[INFO] 10.244.0.23:59460 - 16167 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003277021s
	[INFO] 10.244.0.23:48056 - 1889 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00358376s
	[INFO] 10.244.0.27:59334 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000449806s
	[INFO] 10.244.0.27:46277 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000239947s
	
	
	==> describe nodes <==
	Name:               addons-676842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-676842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=addons-676842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T17_57_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-676842
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 17:57:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-676842
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:03:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:01:53 +0000   Thu, 09 Oct 2025 17:57:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:01:53 +0000   Thu, 09 Oct 2025 17:57:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:01:53 +0000   Thu, 09 Oct 2025 17:57:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:01:53 +0000   Thu, 09 Oct 2025 17:57:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    addons-676842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 022d6156cc4a4ea3bc8610dd75a25dbb
	  System UUID:                022d6156-cc4a-4ea3-bc86-10dd75a25dbb
	  Boot ID:                    b8801998-2ab9-42ed-8870-5a3b8a9a0367
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     hello-world-app-5d498dc89-xm6lq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gadget                      gadget-kctxd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7xdlv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m52s
	  kube-system                 amd-gpu-device-plugin-ns4vt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 coredns-66bc5c9577-vclxq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m1s
	  kube-system                 etcd-addons-676842                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m6s
	  kube-system                 kube-apiserver-addons-676842                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-676842       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-6dblk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-676842                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  local-path-storage          local-path-provisioner-648f6765c9-bt88z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m58s  kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s   kubelet          Node addons-676842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s   kubelet          Node addons-676842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s   kubelet          Node addons-676842 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m5s   kubelet          Node addons-676842 status is now: NodeReady
	  Normal  RegisteredNode           6m2s   node-controller  Node addons-676842 event: Registered Node addons-676842 in Controller
	
	
	==> dmesg <==
	[  +6.753267] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.160303] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.227024] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.111585] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.183757] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 9 17:59] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.636749] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.543740] kauditd_printk_skb: 140 callbacks suppressed
	[Oct 9 18:00] kauditd_printk_skb: 52 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.433345] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.327416] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.793240] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.069978] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000109] kauditd_printk_skb: 44 callbacks suppressed
	[Oct 9 18:01] kauditd_printk_skb: 72 callbacks suppressed
	[  +4.918857] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.977829] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.466140] kauditd_printk_skb: 163 callbacks suppressed
	[  +5.150802] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.738863] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.077013] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.000060] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.858395] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 9 18:03] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [c1163e800698f013bb5436486b14af72bcc8b9690faa4c944255976c5c254616] <==
	{"level":"info","ts":"2025-10-09T17:58:37.828501Z","caller":"traceutil/trace.go:172","msg":"trace[202659597] transaction","detail":"{read_only:false; response_revision:933; number_of_response:1; }","duration":"124.210992ms","start":"2025-10-09T17:58:37.704274Z","end":"2025-10-09T17:58:37.828485Z","steps":["trace[202659597] 'process raft request'  (duration: 124.112048ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T17:58:45.978179Z","caller":"traceutil/trace.go:172","msg":"trace[975475313] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"140.900744ms","start":"2025-10-09T17:58:45.837266Z","end":"2025-10-09T17:58:45.978167Z","steps":["trace[975475313] 'process raft request'  (duration: 140.337157ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T17:58:50.984048Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"316.271155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T17:58:50.984419Z","caller":"traceutil/trace.go:172","msg":"trace[251001665] range","detail":"{range_begin:/registry/rolebindings; range_end:; response_count:0; response_revision:972; }","duration":"316.80018ms","start":"2025-10-09T17:58:50.667605Z","end":"2025-10-09T17:58:50.984405Z","steps":["trace[251001665] 'range keys from in-memory index tree'  (duration: 316.219351ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T17:58:50.984609Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T17:58:50.667590Z","time spent":"317.003656ms","remote":"127.0.0.1:60052","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/rolebindings\" limit:1 "}
	{"level":"info","ts":"2025-10-09T17:58:56.320670Z","caller":"traceutil/trace.go:172","msg":"trace[2139322552] linearizableReadLoop","detail":"{readStateIndex:1023; appliedIndex:1023; }","duration":"138.337305ms","start":"2025-10-09T17:58:56.182303Z","end":"2025-10-09T17:58:56.320640Z","steps":["trace[2139322552] 'read index received'  (duration: 138.332245ms)","trace[2139322552] 'applied index is now lower than readState.Index'  (duration: 4.302µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T17:58:56.320776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.457208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T17:58:56.320830Z","caller":"traceutil/trace.go:172","msg":"trace[1731171968] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:989; }","duration":"138.525565ms","start":"2025-10-09T17:58:56.182299Z","end":"2025-10-09T17:58:56.320824Z","steps":["trace[1731171968] 'agreement among raft nodes before linearized reading'  (duration: 138.412717ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T17:58:56.321013Z","caller":"traceutil/trace.go:172","msg":"trace[1611956985] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"225.72297ms","start":"2025-10-09T17:58:56.095280Z","end":"2025-10-09T17:58:56.321003Z","steps":["trace[1611956985] 'process raft request'  (duration: 225.601467ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T17:59:08.834671Z","caller":"traceutil/trace.go:172","msg":"trace[1418385633] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"282.250599ms","start":"2025-10-09T17:59:08.552404Z","end":"2025-10-09T17:59:08.834654Z","steps":["trace[1418385633] 'process raft request'  (duration: 282.085526ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T17:59:08.845652Z","caller":"traceutil/trace.go:172","msg":"trace[1047319605] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"282.53392ms","start":"2025-10-09T17:59:08.563105Z","end":"2025-10-09T17:59:08.845639Z","steps":["trace[1047319605] 'process raft request'  (duration: 282.437946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:00:20.656877Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.352094ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8436818570333001997 > lease_revoke:<id:751599ca1f1e186b>","response":"size:28"}
	{"level":"info","ts":"2025-10-09T18:00:33.366443Z","caller":"traceutil/trace.go:172","msg":"trace[1750234831] linearizableReadLoop","detail":"{readStateIndex:1334; appliedIndex:1334; }","duration":"104.269814ms","start":"2025-10-09T18:00:33.262132Z","end":"2025-10-09T18:00:33.366402Z","steps":["trace[1750234831] 'read index received'  (duration: 104.263302ms)","trace[1750234831] 'applied index is now lower than readState.Index'  (duration: 5.329µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T18:00:33.366645Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.495145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-10-09T18:00:33.366682Z","caller":"traceutil/trace.go:172","msg":"trace[26272633] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1280; }","duration":"104.539866ms","start":"2025-10-09T18:00:33.262129Z","end":"2025-10-09T18:00:33.366669Z","steps":["trace[26272633] 'agreement among raft nodes before linearized reading'  (duration: 104.402822ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:00:33.367001Z","caller":"traceutil/trace.go:172","msg":"trace[1105283509] transaction","detail":"{read_only:false; response_revision:1281; number_of_response:1; }","duration":"255.470206ms","start":"2025-10-09T18:00:33.111512Z","end":"2025-10-09T18:00:33.366982Z","steps":["trace[1105283509] 'process raft request'  (duration: 255.346851ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:01:08.095234Z","caller":"traceutil/trace.go:172","msg":"trace[1187450247] linearizableReadLoop","detail":"{readStateIndex:1555; appliedIndex:1555; }","duration":"325.106064ms","start":"2025-10-09T18:01:07.770106Z","end":"2025-10-09T18:01:08.095212Z","steps":["trace[1187450247] 'read index received'  (duration: 325.097319ms)","trace[1187450247] 'applied index is now lower than readState.Index'  (duration: 7.387µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T18:01:08.095538Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"325.393766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2025-10-09T18:01:08.095588Z","caller":"traceutil/trace.go:172","msg":"trace[1675077551] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1490; }","duration":"325.477942ms","start":"2025-10-09T18:01:07.770101Z","end":"2025-10-09T18:01:08.095579Z","steps":["trace[1675077551] 'agreement among raft nodes before linearized reading'  (duration: 325.292061ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:01:08.095618Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T18:01:07.770084Z","time spent":"325.523471ms","remote":"127.0.0.1:59664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":1435,"request content":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 "}
	{"level":"info","ts":"2025-10-09T18:01:08.096084Z","caller":"traceutil/trace.go:172","msg":"trace[22358456] transaction","detail":"{read_only:false; response_revision:1491; number_of_response:1; }","duration":"463.307917ms","start":"2025-10-09T18:01:07.632766Z","end":"2025-10-09T18:01:08.096073Z","steps":["trace[22358456] 'process raft request'  (duration: 463.155917ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:01:08.096220Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T18:01:07.632749Z","time spent":"463.372518ms","remote":"127.0.0.1:59700","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1490 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-09T18:01:08.098489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.169077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-0a963da0-6088-440e-83a8-98817e7b62a4\" limit:1 ","response":"range_response_count:1 size:4173"}
	{"level":"info","ts":"2025-10-09T18:01:08.099425Z","caller":"traceutil/trace.go:172","msg":"trace[1882122845] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-0a963da0-6088-440e-83a8-98817e7b62a4; range_end:; response_count:1; response_revision:1491; }","duration":"202.056406ms","start":"2025-10-09T18:01:07.897306Z","end":"2025-10-09T18:01:08.099362Z","steps":["trace[1882122845] 'agreement among raft nodes before linearized reading'  (duration: 201.09874ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:01:20.780756Z","caller":"traceutil/trace.go:172","msg":"trace[293609864] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"132.416383ms","start":"2025-10-09T18:01:20.648321Z","end":"2025-10-09T18:01:20.780738Z","steps":["trace[293609864] 'process raft request'  (duration: 132.247748ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:03:52 up 6 min,  0 users,  load average: 0.77, 1.05, 0.63
	Linux addons-676842 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [43371a3e132f2d3bfb885daf03f6f75a5316f08aa28ae3e7b565219bae41fbcb] <==
	 > logger="UnhandledError"
	E1009 17:58:51.055605       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.251.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.251.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.251.89:443: connect: connection refused" logger="UnhandledError"
	E1009 17:58:51.059417       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.251.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.251.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.251.89:443: connect: connection refused" logger="UnhandledError"
	I1009 17:58:51.193768       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 18:00:41.942740       1 conn.go:339] Error on socket receive: read tcp 192.168.39.66:8443->192.168.39.1:56040: use of closed network connection
	E1009 18:00:42.133195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.66:8443->192.168.39.1:56070: use of closed network connection
	I1009 18:00:51.532303       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.149.230"}
	I1009 18:01:17.880256       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:01:18.125312       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.25.98"}
	I1009 18:01:28.631905       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:01:52.079738       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1009 18:01:56.048943       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:01:56.049374       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:01:56.082991       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:01:56.083159       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:01:56.093355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:01:56.093462       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:01:56.120759       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:01:56.120849       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:01:56.164142       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:01:56.165849       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:01:57.083367       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:01:57.166885       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:01:57.181255       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1009 18:03:50.391098       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.51.19"}
	
	
	==> kube-controller-manager [041cff018946d728797470a008acf54aee6ef1bca690aa6e516b40c511330f9e] <==
	E1009 18:02:05.722007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:02:12.471653       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:12.472883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:02:14.398156       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:14.399179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:02:15.908852       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:15.910341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1009 18:02:21.700141       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1009 18:02:21.700265       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:02:21.747295       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1009 18:02:21.747325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1009 18:02:29.992433       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:29.993643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:02:30.629958       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:30.631056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:02:33.993521       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:02:33.994766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:03:04.946101       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:03:04.947369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:03:06.462583       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:03:06.463732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:03:19.935848       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:03:19.937150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:03:51.878041       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:03:51.879168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [6567c6c81405f261e8da7bae8db6ad88bfc44e0e874e0c791ea46255c3ad5938] <==
	I1009 17:57:53.350163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 17:57:53.452892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 17:57:53.452930       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.66"]
	E1009 17:57:53.453039       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 17:57:53.884898       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 17:57:53.885323       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 17:57:53.885362       1 server_linux.go:132] "Using iptables Proxier"
	I1009 17:57:53.916407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 17:57:53.916764       1 server.go:527] "Version info" version="v1.34.1"
	I1009 17:57:53.916857       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 17:57:53.924433       1 config.go:200] "Starting service config controller"
	I1009 17:57:53.924492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 17:57:53.924510       1 config.go:106] "Starting endpoint slice config controller"
	I1009 17:57:53.924514       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 17:57:53.924530       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 17:57:53.924533       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 17:57:53.925388       1 config.go:309] "Starting node config controller"
	I1009 17:57:53.925396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 17:57:53.925401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 17:57:54.024589       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 17:57:54.024630       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 17:57:54.025700       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [fee6b76d942c184e7d955094f55457dbff7b90e77e07dca7aa6ca9e54b0f4ea2] <==
	E1009 17:57:43.675264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 17:57:43.675304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 17:57:43.675339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 17:57:43.675381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 17:57:43.675417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 17:57:43.675463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 17:57:43.675553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 17:57:43.675586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 17:57:43.675620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 17:57:43.675659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 17:57:43.675785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 17:57:43.676294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 17:57:44.552472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 17:57:44.556654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 17:57:44.566920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 17:57:44.650782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 17:57:44.720083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 17:57:44.815979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 17:57:44.820639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 17:57:44.820742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 17:57:44.844094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 17:57:44.921873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 17:57:44.986476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 17:57:45.140211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1009 17:57:48.243439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 18:02:06 addons-676842 kubelet[1500]: E1009 18:02:06.843928    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032926843081107  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:07 addons-676842 kubelet[1500]: I1009 18:02:07.525596    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ns4vt" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:02:16 addons-676842 kubelet[1500]: E1009 18:02:16.847159    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032936846655502  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:16 addons-676842 kubelet[1500]: E1009 18:02:16.847197    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032936846655502  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:26 addons-676842 kubelet[1500]: E1009 18:02:26.850248    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032946849686310  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:26 addons-676842 kubelet[1500]: E1009 18:02:26.850712    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032946849686310  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:36 addons-676842 kubelet[1500]: E1009 18:02:36.853330    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032956852924691  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:36 addons-676842 kubelet[1500]: E1009 18:02:36.853358    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032956852924691  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:46 addons-676842 kubelet[1500]: E1009 18:02:46.856480    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032966856069340  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:46 addons-676842 kubelet[1500]: E1009 18:02:46.856536    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032966856069340  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:56 addons-676842 kubelet[1500]: E1009 18:02:56.858899    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032976858427631  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:02:56 addons-676842 kubelet[1500]: E1009 18:02:56.858941    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032976858427631  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:06 addons-676842 kubelet[1500]: E1009 18:03:06.862303    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032986861743447  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:06 addons-676842 kubelet[1500]: E1009 18:03:06.862330    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032986861743447  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:07 addons-676842 kubelet[1500]: I1009 18:03:07.525780    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:03:14 addons-676842 kubelet[1500]: I1009 18:03:14.531110    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ns4vt" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:03:16 addons-676842 kubelet[1500]: E1009 18:03:16.866421    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760032996865968719  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:16 addons-676842 kubelet[1500]: E1009 18:03:16.866458    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760032996865968719  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:26 addons-676842 kubelet[1500]: E1009 18:03:26.869661    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760033006868899411  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:26 addons-676842 kubelet[1500]: E1009 18:03:26.869704    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760033006868899411  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:36 addons-676842 kubelet[1500]: E1009 18:03:36.873066    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760033016872523373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:36 addons-676842 kubelet[1500]: E1009 18:03:36.873099    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760033016872523373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:46 addons-676842 kubelet[1500]: E1009 18:03:46.877919    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760033026877201462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:46 addons-676842 kubelet[1500]: E1009 18:03:46.877958    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760033026877201462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:03:50 addons-676842 kubelet[1500]: I1009 18:03:50.411017    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdc8\" (UniqueName: \"kubernetes.io/projected/cf08c44c-b327-4d19-9e25-8a4a2a57188b-kube-api-access-zgdc8\") pod \"hello-world-app-5d498dc89-xm6lq\" (UID: \"cf08c44c-b327-4d19-9e25-8a4a2a57188b\") " pod="default/hello-world-app-5d498dc89-xm6lq"
	
	
	==> storage-provisioner [2734107176a8a5abfa14be2ec96f85927b0fa294a6e23b891ad2598a6867bfb5] <==
	W1009 18:03:26.966294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:28.970757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:28.978265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:30.981273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:30.987232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:32.991871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:32.998093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:35.001489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:35.007528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:37.011159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:37.021220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:39.025473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:39.031154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:41.035125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:41.043126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:43.047303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:43.052769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:45.056771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:45.065165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:47.069044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:47.076189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:49.080455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:49.088578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:51.092667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:03:51.103299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-676842 -n addons-676842
helpers_test.go:269: (dbg) Run:  kubectl --context addons-676842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-xm6lq ingress-nginx-admission-create-9mv75 ingress-nginx-admission-patch-mrdwp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-676842 describe pod hello-world-app-5d498dc89-xm6lq ingress-nginx-admission-create-9mv75 ingress-nginx-admission-patch-mrdwp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-676842 describe pod hello-world-app-5d498dc89-xm6lq ingress-nginx-admission-create-9mv75 ingress-nginx-admission-patch-mrdwp: exit status 1 (86.66561ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-xm6lq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-676842/192.168.39.66
	Start Time:       Thu, 09 Oct 2025 18:03:50 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgdc8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zgdc8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-xm6lq to addons-676842
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9mv75" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mrdwp" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-676842 describe pod hello-world-app-5d498dc89-xm6lq ingress-nginx-admission-create-9mv75 ingress-nginx-admission-patch-mrdwp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable ingress-dns --alsologtostderr -v=1: (1.906323296s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable ingress --alsologtostderr -v=1: (7.856010396s)
--- FAIL: TestAddons/parallel/Ingress (165.34s)
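The curl inside the node timed out (ssh exit status 28) rather than returning the nginx page, so the request never got a response from the ingress controller. A minimal sketch for reproducing the probe by hand, assuming the addons-676842 profile is still running (the Host header and backend Service come from the nginx testdata manifests this test applies):

	# reproduce the in-node ingress probe with an explicit timeout
	out/minikube-linux-amd64 -p addons-676842 ssh -- curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/
	# if it still hangs, confirm the controller and the Ingress object state
	kubectl --context addons-676842 -n ingress-nginx get pods -o wide
	kubectl --context addons-676842 describe ingress nginx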

TestFunctional/parallel/ImageCommands/ImageRemove (3.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image rm kicbase/echo-server:functional-396225 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 image rm kicbase/echo-server:functional-396225 --alsologtostderr: (3.088301563s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-396225" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.38s)
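Here `image rm` exited zero but the tag was still listed afterwards. A hedged sketch for comparing minikube's view with the runtime's image store, assuming the functional-396225 profile is still up and that crictl is available inside the guest (it ships in the minikube ISO, but that is an assumption about this image):

	# compare minikube's image list with the CRI store inside the node
	out/minikube-linux-amd64 -p functional-396225 image ls
	out/minikube-linux-amd64 -p functional-396225 ssh -- sudo crictl images | grep echo-server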

TestPreload (162.87s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1009 18:50:30.818258   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:50:50.957173   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m31.858609654s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591097 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-591097 image pull gcr.io/k8s-minikube/busybox: (3.375712228s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-591097
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-591097: (7.121709689s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591097 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591097 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.507389014s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591097 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
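Only the preloaded v1.32.0 control-plane images survived the restart; the busybox image pulled before `stop` is gone. A minimal sketch of the same persistence check, mirroring the test steps above (profile name taken from this run; any small image would do in place of busybox):

	# pull, restart the VM, then confirm the image is still in the store
	out/minikube-linux-amd64 -p test-preload-591097 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-591097
	out/minikube-linux-amd64 start -p test-preload-591097
	out/minikube-linux-amd64 -p test-preload-591097 image list | grep busybox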
panic.go:636: *** TestPreload FAILED at 2025-10-09 18:52:45.976777994 +0000 UTC m=+3377.151048206
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-591097 -n test-preload-591097
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591097 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-591097 logs -n 25: (1.107389442s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-752141 ssh -n multinode-752141-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ ssh     │ multinode-752141 ssh -n multinode-752141 sudo cat /home/docker/cp-test_multinode-752141-m03_multinode-752141.txt                                                                    │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ cp      │ multinode-752141 cp multinode-752141-m03:/home/docker/cp-test.txt multinode-752141-m02:/home/docker/cp-test_multinode-752141-m03_multinode-752141-m02.txt                           │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ ssh     │ multinode-752141 ssh -n multinode-752141-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ ssh     │ multinode-752141 ssh -n multinode-752141-m02 sudo cat /home/docker/cp-test_multinode-752141-m03_multinode-752141-m02.txt                                                            │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ node    │ multinode-752141 node stop m03                                                                                                                                                      │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:38 UTC │
	│ node    │ multinode-752141 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │ 09 Oct 25 18:39 UTC │
	│ node    │ list -p multinode-752141                                                                                                                                                            │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	│ stop    │ -p multinode-752141                                                                                                                                                                 │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:41 UTC │
	│ start   │ -p multinode-752141 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:41 UTC │ 09 Oct 25 18:44 UTC │
	│ node    │ list -p multinode-752141                                                                                                                                                            │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ node    │ multinode-752141 node delete m03                                                                                                                                                    │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ stop    │ multinode-752141 stop                                                                                                                                                               │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:47 UTC │
	│ start   │ -p multinode-752141 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:49 UTC │
	│ node    │ list -p multinode-752141                                                                                                                                                            │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:49 UTC │                     │
	│ start   │ -p multinode-752141-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-752141-m02 │ jenkins │ v1.37.0 │ 09 Oct 25 18:49 UTC │                     │
	│ start   │ -p multinode-752141-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-752141-m03 │ jenkins │ v1.37.0 │ 09 Oct 25 18:49 UTC │ 09 Oct 25 18:50 UTC │
	│ node    │ add -p multinode-752141                                                                                                                                                             │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:50 UTC │                     │
	│ delete  │ -p multinode-752141-m03                                                                                                                                                             │ multinode-752141-m03 │ jenkins │ v1.37.0 │ 09 Oct 25 18:50 UTC │ 09 Oct 25 18:50 UTC │
	│ delete  │ -p multinode-752141                                                                                                                                                                 │ multinode-752141     │ jenkins │ v1.37.0 │ 09 Oct 25 18:50 UTC │ 09 Oct 25 18:50 UTC │
	│ start   │ -p test-preload-591097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-591097  │ jenkins │ v1.37.0 │ 09 Oct 25 18:50 UTC │ 09 Oct 25 18:51 UTC │
	│ image   │ test-preload-591097 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-591097  │ jenkins │ v1.37.0 │ 09 Oct 25 18:51 UTC │ 09 Oct 25 18:51 UTC │
	│ stop    │ -p test-preload-591097                                                                                                                                                              │ test-preload-591097  │ jenkins │ v1.37.0 │ 09 Oct 25 18:51 UTC │ 09 Oct 25 18:51 UTC │
	│ start   │ -p test-preload-591097 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-591097  │ jenkins │ v1.37.0 │ 09 Oct 25 18:51 UTC │ 09 Oct 25 18:52 UTC │
	│ image   │ test-preload-591097 image list                                                                                                                                                      │ test-preload-591097  │ jenkins │ v1.37.0 │ 09 Oct 25 18:52 UTC │ 09 Oct 25 18:52 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:51:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:51:48.304070   47103 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:51:48.304300   47103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:48.304310   47103 out.go:374] Setting ErrFile to fd 2...
	I1009 18:51:48.304314   47103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:48.304488   47103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:51:48.304933   47103 out.go:368] Setting JSON to false
	I1009 18:51:48.305827   47103 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5648,"bootTime":1760030260,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:51:48.305914   47103 start.go:141] virtualization: kvm guest
	I1009 18:51:48.307700   47103 out.go:179] * [test-preload-591097] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:51:48.308797   47103 notify.go:220] Checking for updates...
	I1009 18:51:48.308811   47103 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:51:48.309930   47103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:51:48.311177   47103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:51:48.312427   47103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:51:48.313399   47103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:51:48.314494   47103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:51:48.315859   47103 config.go:182] Loaded profile config "test-preload-591097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 18:51:48.316245   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:48.316302   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:48.330452   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I1009 18:51:48.330991   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:48.331542   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:51:48.331571   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:48.331972   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:48.332164   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:51:48.333916   47103 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1009 18:51:48.335178   47103 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:51:48.335460   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:48.335494   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:48.348722   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I1009 18:51:48.349245   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:48.349770   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:51:48.349792   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:48.350210   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:48.350431   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:51:48.384898   47103 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 18:51:48.385964   47103 start.go:305] selected driver: kvm2
	I1009 18:51:48.385981   47103 start.go:925] validating driver "kvm2" against &{Name:test-preload-591097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-591097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:51:48.386110   47103 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:51:48.386799   47103 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:51:48.386883   47103 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:51:48.400539   47103 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:51:48.400566   47103 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:51:48.414056   47103 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:51:48.414455   47103 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:51:48.414485   47103 cni.go:84] Creating CNI manager for ""
	I1009 18:51:48.414545   47103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:51:48.414608   47103 start.go:349] cluster config:
	{Name:test-preload-591097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-591097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:51:48.414719   47103 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:51:48.416814   47103 out.go:179] * Starting "test-preload-591097" primary control-plane node in "test-preload-591097" cluster
	I1009 18:51:48.417739   47103 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 18:51:48.809371   47103 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:51:48.809407   47103 cache.go:64] Caching tarball of preloaded images
	I1009 18:51:48.809596   47103 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 18:51:48.811241   47103 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1009 18:51:48.812377   47103 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 18:51:48.910306   47103 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1009 18:51:48.910358   47103 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:51:59.456331   47103 cache.go:67] Finished verifying existence of preloaded tar for v1.32.0 on crio
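
The preload tarball above is fetched with a `?checksum=md5:` query and then verified against the digest the GCS API returned. A minimal Go sketch of that verification step, using the path and digest from the log (the helper name is illustrative, not minikube's own):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 streams a file through an MD5 hash and compares the hex
    // digest with the expected checksum (here, the value returned by the
    // GCS API above).
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        err := verifyMD5(
            "preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
            "2acdb4dde52794f2167c79dcee7507ae",
        )
        fmt.Println(err)
    }
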
	I1009 18:51:59.456482   47103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/config.json ...
	I1009 18:51:59.456741   47103 start.go:360] acquireMachinesLock for test-preload-591097: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:51:59.456799   47103 start.go:364] duration metric: took 36.705µs to acquireMachinesLock for "test-preload-591097"
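
The 36µs acquisition above is the uncontended fast path; the lock settings in the log (Delay:500ms, Timeout:13m0s) describe a poll-until-deadline loop. A rough sketch of that pattern with a plain lock file, assuming exclusive-create semantics (minikube's real implementation lives in its lock package and differs in detail):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries an exclusive lock-file create every delay until the
    // timeout elapses, mirroring Delay:500ms / Timeout:13m0s above.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held")
    }
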
	I1009 18:51:59.456814   47103 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:51:59.456819   47103 fix.go:54] fixHost starting: 
	I1009 18:51:59.457117   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:59.457161   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:59.470538   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I1009 18:51:59.471101   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:59.471602   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:51:59.471621   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:59.471949   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:59.472127   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:51:59.472302   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetState
	I1009 18:51:59.474306   47103 fix.go:112] recreateIfNeeded on test-preload-591097: state=Stopped err=<nil>
	I1009 18:51:59.474329   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	W1009 18:51:59.474515   47103 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:51:59.476362   47103 out.go:252] * Restarting existing kvm2 VM for "test-preload-591097" ...
	I1009 18:51:59.476388   47103 main.go:141] libmachine: (test-preload-591097) Calling .Start
	I1009 18:51:59.476543   47103 main.go:141] libmachine: (test-preload-591097) starting domain...
	I1009 18:51:59.476562   47103 main.go:141] libmachine: (test-preload-591097) ensuring networks are active...
	I1009 18:51:59.477411   47103 main.go:141] libmachine: (test-preload-591097) Ensuring network default is active
	I1009 18:51:59.477810   47103 main.go:141] libmachine: (test-preload-591097) Ensuring network mk-test-preload-591097 is active
	I1009 18:51:59.478291   47103 main.go:141] libmachine: (test-preload-591097) getting domain XML...
	I1009 18:51:59.479435   47103 main.go:141] libmachine: (test-preload-591097) DBG | starting domain XML:
	I1009 18:51:59.479452   47103 main.go:141] libmachine: (test-preload-591097) DBG | <domain type='kvm'>
	I1009 18:51:59.479464   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <name>test-preload-591097</name>
	I1009 18:51:59.479472   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <uuid>c8464be8-a9e2-43b9-981a-191e3b63125a</uuid>
	I1009 18:51:59.479482   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 18:51:59.479491   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 18:51:59.479505   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:51:59.479514   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <os>
	I1009 18:51:59.479526   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:51:59.479538   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <boot dev='cdrom'/>
	I1009 18:51:59.479547   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <boot dev='hd'/>
	I1009 18:51:59.479556   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <bootmenu enable='no'/>
	I1009 18:51:59.479564   47103 main.go:141] libmachine: (test-preload-591097) DBG |   </os>
	I1009 18:51:59.479611   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <features>
	I1009 18:51:59.479625   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <acpi/>
	I1009 18:51:59.479631   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <apic/>
	I1009 18:51:59.479637   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <pae/>
	I1009 18:51:59.479645   47103 main.go:141] libmachine: (test-preload-591097) DBG |   </features>
	I1009 18:51:59.479656   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:51:59.479666   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <clock offset='utc'/>
	I1009 18:51:59.479676   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:51:59.479693   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:51:59.479702   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <on_crash>destroy</on_crash>
	I1009 18:51:59.479712   47103 main.go:141] libmachine: (test-preload-591097) DBG |   <devices>
	I1009 18:51:59.479725   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:51:59.479733   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <disk type='file' device='cdrom'>
	I1009 18:51:59.479757   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:51:59.479776   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/boot2docker.iso'/>
	I1009 18:51:59.479822   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:51:59.479846   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <readonly/>
	I1009 18:51:59.479859   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:51:59.479872   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </disk>
	I1009 18:51:59.479883   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <disk type='file' device='disk'>
	I1009 18:51:59.479892   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:51:59.479908   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/test-preload-591097.rawdisk'/>
	I1009 18:51:59.479919   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:51:59.479933   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:51:59.479943   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </disk>
	I1009 18:51:59.479954   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:51:59.479975   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:51:59.479986   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </controller>
	I1009 18:51:59.479994   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:51:59.480006   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:51:59.480024   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:51:59.480048   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </controller>
	I1009 18:51:59.480061   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <interface type='network'>
	I1009 18:51:59.480073   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <mac address='52:54:00:cf:81:c6'/>
	I1009 18:51:59.480086   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <source network='mk-test-preload-591097'/>
	I1009 18:51:59.480094   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <model type='virtio'/>
	I1009 18:51:59.480107   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:51:59.480118   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </interface>
	I1009 18:51:59.480129   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <interface type='network'>
	I1009 18:51:59.480144   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <mac address='52:54:00:98:5a:73'/>
	I1009 18:51:59.480162   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <source network='default'/>
	I1009 18:51:59.480183   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <model type='virtio'/>
	I1009 18:51:59.480211   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:51:59.480225   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </interface>
	I1009 18:51:59.480233   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <serial type='pty'>
	I1009 18:51:59.480246   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <target type='isa-serial' port='0'>
	I1009 18:51:59.480256   47103 main.go:141] libmachine: (test-preload-591097) DBG |         <model name='isa-serial'/>
	I1009 18:51:59.480267   47103 main.go:141] libmachine: (test-preload-591097) DBG |       </target>
	I1009 18:51:59.480281   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </serial>
	I1009 18:51:59.480291   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <console type='pty'>
	I1009 18:51:59.480303   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <target type='serial' port='0'/>
	I1009 18:51:59.480314   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </console>
	I1009 18:51:59.480326   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:51:59.480339   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:51:59.480350   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <audio id='1' type='none'/>
	I1009 18:51:59.480363   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <memballoon model='virtio'>
	I1009 18:51:59.480375   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:51:59.480384   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </memballoon>
	I1009 18:51:59.480399   47103 main.go:141] libmachine: (test-preload-591097) DBG |     <rng model='virtio'>
	I1009 18:51:59.480413   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:51:59.480434   47103 main.go:141] libmachine: (test-preload-591097) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:51:59.480445   47103 main.go:141] libmachine: (test-preload-591097) DBG |     </rng>
	I1009 18:51:59.480455   47103 main.go:141] libmachine: (test-preload-591097) DBG |   </devices>
	I1009 18:51:59.480465   47103 main.go:141] libmachine: (test-preload-591097) DBG | </domain>
	I1009 18:51:59.480472   47103 main.go:141] libmachine: (test-preload-591097) DBG | 
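
The domain XML dump above is what the driver inspects to recover the machine's MAC addresses and the networks they are attached to. A small Go sketch of pulling those fields out with encoding/xml, with the struct trimmed to just the pieces used here (a sketch, not the driver's actual types):

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // domain covers only the fields needed from the XML above: the name
    // and each <interface> with its MAC and source network.
    type domain struct {
        Name       string `xml:"name"`
        Interfaces []struct {
            MAC struct {
                Address string `xml:"address,attr"`
            } `xml:"mac"`
            Source struct {
                Network string `xml:"network,attr"`
            } `xml:"source"`
        } `xml:"devices>interface"`
    }

    func main() {
        raw := `<domain type='kvm'><name>test-preload-591097</name><devices>
          <interface type='network'><mac address='52:54:00:cf:81:c6'/><source network='mk-test-preload-591097'/></interface>
          <interface type='network'><mac address='52:54:00:98:5a:73'/><source network='default'/></interface>
        </devices></domain>`
        var d domain
        if err := xml.Unmarshal([]byte(raw), &d); err != nil {
            panic(err)
        }
        for _, i := range d.Interfaces {
            fmt.Printf("%s: %s on %s\n", d.Name, i.MAC.Address, i.Source.Network)
        }
    }
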
	I1009 18:52:00.917802   47103 main.go:141] libmachine: (test-preload-591097) waiting for domain to start...
	I1009 18:52:00.919417   47103 main.go:141] libmachine: (test-preload-591097) domain is now running
	I1009 18:52:00.919442   47103 main.go:141] libmachine: (test-preload-591097) waiting for IP...
	I1009 18:52:00.920756   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:00.921526   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has current primary IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:00.921543   47103 main.go:141] libmachine: (test-preload-591097) found domain IP: 192.168.39.4
	I1009 18:52:00.921557   47103 main.go:141] libmachine: (test-preload-591097) reserving static IP address...
	I1009 18:52:00.922127   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "test-preload-591097", mac: "52:54:00:cf:81:c6", ip: "192.168.39.4"} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:50:21 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:00.922162   47103 main.go:141] libmachine: (test-preload-591097) DBG | skip adding static IP to network mk-test-preload-591097 - found existing host DHCP lease matching {name: "test-preload-591097", mac: "52:54:00:cf:81:c6", ip: "192.168.39.4"}
	I1009 18:52:00.922176   47103 main.go:141] libmachine: (test-preload-591097) reserved static IP address 192.168.39.4 for domain test-preload-591097
	I1009 18:52:00.922199   47103 main.go:141] libmachine: (test-preload-591097) waiting for SSH...
	I1009 18:52:00.922214   47103 main.go:141] libmachine: (test-preload-591097) DBG | Getting to WaitForSSH function...
	I1009 18:52:00.925025   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:00.925469   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:50:21 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:00.925490   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:00.925650   47103 main.go:141] libmachine: (test-preload-591097) DBG | Using SSH client type: external
	I1009 18:52:00.925689   47103 main.go:141] libmachine: (test-preload-591097) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa (-rw-------)
	I1009 18:52:00.925727   47103 main.go:141] libmachine: (test-preload-591097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:52:00.925751   47103 main.go:141] libmachine: (test-preload-591097) DBG | About to run SSH command:
	I1009 18:52:00.925781   47103 main.go:141] libmachine: (test-preload-591097) DBG | exit 0
	I1009 18:52:11.179682   47103 main.go:141] libmachine: (test-preload-591097) DBG | SSH cmd err, output: exit status 255: 
	I1009 18:52:11.179720   47103 main.go:141] libmachine: (test-preload-591097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1009 18:52:11.179733   47103 main.go:141] libmachine: (test-preload-591097) DBG | command : exit 0
	I1009 18:52:11.179741   47103 main.go:141] libmachine: (test-preload-591097) DBG | err     : exit status 255
	I1009 18:52:11.179780   47103 main.go:141] libmachine: (test-preload-591097) DBG | output  : 
	I1009 18:52:14.181221   47103 main.go:141] libmachine: (test-preload-591097) DBG | Getting to WaitForSSH function...
	I1009 18:52:14.183954   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.184442   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.184489   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.184713   47103 main.go:141] libmachine: (test-preload-591097) DBG | Using SSH client type: external
	I1009 18:52:14.184749   47103 main.go:141] libmachine: (test-preload-591097) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa (-rw-------)
	I1009 18:52:14.184783   47103 main.go:141] libmachine: (test-preload-591097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:52:14.184793   47103 main.go:141] libmachine: (test-preload-591097) DBG | About to run SSH command:
	I1009 18:52:14.184837   47103 main.go:141] libmachine: (test-preload-591097) DBG | exit 0
	I1009 18:52:14.322302   47103 main.go:141] libmachine: (test-preload-591097) DBG | SSH cmd err, output: <nil>: 
	I1009 18:52:14.322769   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetConfigRaw
	I1009 18:52:14.323425   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetIP
	I1009 18:52:14.326436   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.326819   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.326852   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.327113   47103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/config.json ...
	I1009 18:52:14.327365   47103 machine.go:93] provisionDockerMachine start ...
	I1009 18:52:14.327388   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:14.327624   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:14.330212   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.330530   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.330570   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.330769   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:14.330935   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.331092   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.331254   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:14.331426   47103 main.go:141] libmachine: Using SSH client type: native
	I1009 18:52:14.331646   47103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1009 18:52:14.331658   47103 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:52:14.444222   47103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 18:52:14.444255   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetMachineName
	I1009 18:52:14.444481   47103 buildroot.go:166] provisioning hostname "test-preload-591097"
	I1009 18:52:14.444508   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetMachineName
	I1009 18:52:14.444737   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:14.447631   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.447977   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.448000   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.448225   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:14.448422   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.448562   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.448696   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:14.448861   47103 main.go:141] libmachine: Using SSH client type: native
	I1009 18:52:14.449114   47103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1009 18:52:14.449132   47103 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-591097 && echo "test-preload-591097" | sudo tee /etc/hostname
	I1009 18:52:14.582320   47103 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-591097
	
	I1009 18:52:14.582399   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:14.585725   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.586109   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.586139   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.586397   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:14.586576   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.586775   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:14.586942   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:14.587130   47103 main.go:141] libmachine: Using SSH client type: native
	I1009 18:52:14.587424   47103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1009 18:52:14.587452   47103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-591097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-591097/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-591097' | sudo tee -a /etc/hosts; 
				fi
			fi
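
The shell block above makes the /etc/hosts update idempotent: skip if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a small Go function operating on the file contents as a string (the real code runs the shell over SSH instead):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the check-then-edit above: no-op when the
    // name is present, replace the 127.0.1.1 line, else append one.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
                return hosts // already mapped
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "test-preload-591097"))
    }
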
	I1009 18:52:14.709893   47103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:52:14.709925   47103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:52:14.709946   47103 buildroot.go:174] setting up certificates
	I1009 18:52:14.709973   47103 provision.go:84] configureAuth start
	I1009 18:52:14.709986   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetMachineName
	I1009 18:52:14.710307   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetIP
	I1009 18:52:14.713459   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.713884   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.713909   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.714124   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:14.716459   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.716827   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:14.716857   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:14.717017   47103 provision.go:143] copyHostCerts
	I1009 18:52:14.717092   47103 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:52:14.717115   47103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:52:14.717189   47103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:52:14.717302   47103 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:52:14.717312   47103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:52:14.717340   47103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:52:14.717412   47103 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:52:14.717419   47103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:52:14.717441   47103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:52:14.717496   47103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.test-preload-591097 san=[127.0.0.1 192.168.39.4 localhost minikube test-preload-591097]
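
configureAuth generates a server certificate whose SANs cover every name the machine may be reached by, as listed in the log line above. A pared-down crypto/x509 sketch using that SAN list; it self-signs to stay short, whereas the real certificate is signed by the minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-591097"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: san=[127.0.0.1 192.168.39.4 localhost minikube test-preload-591097]
            DNSNames:    []string{"localhost", "minikube", "test-preload-591097"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.4")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
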
	I1009 18:52:15.094169   47103 provision.go:177] copyRemoteCerts
	I1009 18:52:15.094235   47103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:52:15.094258   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.097074   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.097411   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.097441   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.097652   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.097871   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.098011   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.098165   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
	I1009 18:52:15.184949   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:52:15.215187   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:52:15.245456   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:52:15.274873   47103 provision.go:87] duration metric: took 564.884155ms to configureAuth
	I1009 18:52:15.274901   47103 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:52:15.275071   47103 config.go:182] Loaded profile config "test-preload-591097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 18:52:15.275137   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.278080   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.278485   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.278512   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.278746   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.278975   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.279157   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.279321   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.279477   47103 main.go:141] libmachine: Using SSH client type: native
	I1009 18:52:15.279671   47103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1009 18:52:15.279687   47103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:52:15.530705   47103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:52:15.530733   47103 machine.go:96] duration metric: took 1.203353485s to provisionDockerMachine
	I1009 18:52:15.530747   47103 start.go:293] postStartSetup for "test-preload-591097" (driver="kvm2")
	I1009 18:52:15.530760   47103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:52:15.530799   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:15.531165   47103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:52:15.531192   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.534388   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.534746   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.534770   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.534979   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.535210   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.535372   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.535522   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
	I1009 18:52:15.624073   47103 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:52:15.628812   47103 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:52:15.628839   47103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:52:15.628936   47103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:52:15.629029   47103 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:52:15.629175   47103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:52:15.641115   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:52:15.670652   47103 start.go:296] duration metric: took 139.890123ms for postStartSetup
	I1009 18:52:15.670692   47103 fix.go:56] duration metric: took 16.213873264s for fixHost
	I1009 18:52:15.670712   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.673534   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.673876   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.673902   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.674097   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.674303   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.674452   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.674610   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.674770   47103 main.go:141] libmachine: Using SSH client type: native
	I1009 18:52:15.674962   47103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I1009 18:52:15.674971   47103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:52:15.786900   47103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760035935.752038840
	
	I1009 18:52:15.786921   47103 fix.go:216] guest clock: 1760035935.752038840
	I1009 18:52:15.786931   47103 fix.go:229] Guest: 2025-10-09 18:52:15.75203884 +0000 UTC Remote: 2025-10-09 18:52:15.670696254 +0000 UTC m=+27.405364654 (delta=81.342586ms)
	I1009 18:52:15.786994   47103 fix.go:200] guest clock delta is within tolerance: 81.342586ms
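
The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and only resync when the skew exceeds a tolerance; here the 81ms delta passes. A minimal sketch of that comparison, with the tolerance value chosen for illustration only:

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance reports whether the absolute guest/host skew
    // is at most the given tolerance.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1760035935, 752038840) // guest clock from the log
        host := guest.Add(-81342586)              // the 81.342586ms delta, in nanoseconds
        fmt.Println(clockWithinTolerance(guest, host, 2*time.Second))
    }
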
	I1009 18:52:15.787002   47103 start.go:83] releasing machines lock for "test-preload-591097", held for 16.330192321s
	I1009 18:52:15.787033   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:15.787290   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetIP
	I1009 18:52:15.790414   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.790855   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.790886   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.791085   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:15.791635   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:15.791818   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:15.791896   47103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:52:15.791936   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.792067   47103 ssh_runner.go:195] Run: cat /version.json
	I1009 18:52:15.792094   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:15.795078   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.795398   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.795426   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.795445   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.795599   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.795773   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.795927   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.795971   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:15.796016   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:15.796073   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
	I1009 18:52:15.796237   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:15.796383   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:15.796532   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:15.796687   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
	I1009 18:52:15.907230   47103 ssh_runner.go:195] Run: systemctl --version
	I1009 18:52:15.913627   47103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:52:16.059431   47103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:52:16.066337   47103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:52:16.066402   47103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:52:16.086964   47103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
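
Disabling 87-podman-bridge.conflist above keeps two CNIs from claiming the same interfaces: the find/mv pipeline renames any bridge or podman config so only the CNI minikube selected (bridge, in this run) remains active. The equivalent expressed as a Go walk over /etc/cni/net.d, assuming direct filesystem access rather than SSH:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman configs to *.mk_disabled,
    // skipping anything already disabled, and returns what it moved.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
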
	I1009 18:52:16.086993   47103 start.go:495] detecting cgroup driver to use...
	I1009 18:52:16.087073   47103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:52:16.108242   47103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:52:16.126692   47103 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:52:16.126755   47103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:52:16.144494   47103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:52:16.161198   47103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:52:16.314925   47103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:52:16.531010   47103 docker.go:234] disabling docker service ...
	I1009 18:52:16.531107   47103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:52:16.549126   47103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:52:16.564961   47103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:52:16.722780   47103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:52:16.867799   47103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:52:16.883862   47103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:52:16.906987   47103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:52:16.907077   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:16.919786   47103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:52:16.919865   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:16.932581   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:16.945185   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:16.958354   47103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:52:16.972371   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:16.985489   47103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:52:17.006721   47103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
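
The series of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open unprivileged low ports via default_sysctls. The two key substitutions expressed as Go regexp replacements over the file contents (a sketch of the edit, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        // Match whole config lines, as the sed patterns above do.
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    func patchCrioConf(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        fmt.Println(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
    }
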
	I1009 18:52:17.019740   47103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:52:17.030748   47103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:52:17.030826   47103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:52:17.051091   47103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
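
As the "might be okay" message says, the netfilter probe is best-effort: when the bridge-nf-call-iptables sysctl is missing, the fallback is to load br_netfilter and enable IP forwarding directly, which is what the two commands above do. A compact sketch of that sequence (it must run as root, and the helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter loads br_netfilter if the bridge sysctl key is
    // absent, then enables IPv4 forwarding via procfs.
    func ensureNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
    }

    func main() {
        fmt.Println(ensureNetfilter())
    }
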
	I1009 18:52:17.062836   47103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:52:17.204722   47103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:52:17.320650   47103 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:52:17.320749   47103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:52:17.326555   47103 start.go:563] Will wait 60s for crictl version
	I1009 18:52:17.326628   47103 ssh_runner.go:195] Run: which crictl
	I1009 18:52:17.330972   47103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:52:17.378107   47103 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:52:17.378211   47103 ssh_runner.go:195] Run: crio --version
	I1009 18:52:17.410497   47103 ssh_runner.go:195] Run: crio --version
	I1009 18:52:17.441715   47103 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1009 18:52:17.442872   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetIP
	I1009 18:52:17.445737   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:17.446129   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:17.446157   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:17.446420   47103 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 18:52:17.450868   47103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
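That one-liner is an idempotent replace-or-append on /etc/hosts: filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back with sudo (a plain > redirect would be opened by the unprivileged shell, not by sudo). Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$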
	I1009 18:52:17.466211   47103 kubeadm.go:883] updating cluster {Name:test-preload-591097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-591097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:52:17.466348   47103 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 18:52:17.466391   47103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:52:17.507343   47103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1009 18:52:17.507421   47103 ssh_runner.go:195] Run: which lz4
	I1009 18:52:17.512516   47103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 18:52:17.517574   47103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 18:52:17.517629   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1009 18:52:18.975766   47103 crio.go:462] duration metric: took 1.463282155s to copy over tarball
	I1009 18:52:18.975834   47103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 18:52:20.780195   47103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.804320756s)
	I1009 18:52:20.780222   47103 crio.go:469] duration metric: took 1.804430791s to extract the tarball
	I1009 18:52:20.780229   47103 ssh_runner.go:146] rm: /preloaded.tar.lz4
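This is the preload fast path: the stat probe shows /preloaded.tar.lz4 is absent, so the cached image tarball is copied over SSH and unpacked into /var (which holds CRI-O's image store), with extended attributes preserved so file capabilities survive. A condensed guest-side sketch (paths from the log; the host-side scp step is elided):

    #!/usr/bin/env bash
    set -euo pipefail
    TARBALL=/preloaded.tar.lz4
    if stat -c '%s %y' "$TARBALL" >/dev/null 2>&1; then
      # -I lz4 decompresses through lz4; --xattrs keeps security.capability bits intact
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
      rm -f "$TARBALL"
    fi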
	I1009 18:52:20.820071   47103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:52:20.872166   47103 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:52:20.872191   47103 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:52:20.872199   47103 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.32.0 crio true true} ...
	I1009 18:52:20.872285   47103 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-591097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-591097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
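The doubled ExecStart is the standard systemd override idiom: an empty ExecStart= first clears the command inherited from the base kubelet.service, then the second line installs the minikube-specific invocation. Once the drop-in lands (the scp of 10-kubeadm.conf a few lines below) the override can be confirmed with:

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # effective command line after the override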
	I1009 18:52:20.872350   47103 ssh_runner.go:195] Run: crio config
	I1009 18:52:20.917587   47103 cni.go:84] Creating CNI manager for ""
	I1009 18:52:20.917616   47103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:52:20.917635   47103 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:52:20.917654   47103 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-591097 NodeName:test-preload-591097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:52:20.917763   47103 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-591097"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
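Four YAML documents are generated in one file: InitConfiguration (endpoint, node registration), ClusterConfiguration (component extraArgs, cert dirs, subnets), KubeletConfiguration (cgroupfs driver, eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack opt-outs). To sanity-check such a file by hand, recent kubeadm releases ship a validator (assuming the v1.32 CLI shape):

    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new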
	I1009 18:52:20.917841   47103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1009 18:52:20.930681   47103 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:52:20.930778   47103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:52:20.943112   47103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 18:52:20.965500   47103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:52:20.987387   47103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1009 18:52:21.009414   47103 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I1009 18:52:21.013913   47103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:52:21.029369   47103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:52:21.179680   47103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:52:21.201454   47103 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097 for IP: 192.168.39.4
	I1009 18:52:21.201479   47103 certs.go:195] generating shared ca certs ...
	I1009 18:52:21.201495   47103 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:52:21.201680   47103 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:52:21.201734   47103 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:52:21.201744   47103 certs.go:257] generating profile certs ...
	I1009 18:52:21.201862   47103 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.key
	I1009 18:52:21.201947   47103 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/apiserver.key.ba60b1eb
	I1009 18:52:21.201996   47103 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/proxy-client.key
	I1009 18:52:21.202159   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:52:21.202199   47103 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:52:21.202205   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:52:21.202238   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:52:21.202269   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:52:21.202317   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:52:21.202377   47103 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:52:21.203177   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:52:21.240374   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:52:21.275401   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:52:21.309224   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:52:21.341047   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 18:52:21.372459   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:52:21.404476   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:52:21.436550   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:52:21.467992   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:52:21.499956   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
	I1009 18:52:21.531143   47103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:52:21.561786   47103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:52:21.582892   47103 ssh_runner.go:195] Run: openssl version
	I1009 18:52:21.589648   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15263.pem && ln -fs /usr/share/ca-certificates/15263.pem /etc/ssl/certs/15263.pem"
	I1009 18:52:21.603190   47103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15263.pem
	I1009 18:52:21.608455   47103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:07 /usr/share/ca-certificates/15263.pem
	I1009 18:52:21.608516   47103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15263.pem
	I1009 18:52:21.616235   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15263.pem /etc/ssl/certs/51391683.0"
	I1009 18:52:21.630395   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152632.pem && ln -fs /usr/share/ca-certificates/152632.pem /etc/ssl/certs/152632.pem"
	I1009 18:52:21.644637   47103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152632.pem
	I1009 18:52:21.650266   47103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:07 /usr/share/ca-certificates/152632.pem
	I1009 18:52:21.650329   47103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152632.pem
	I1009 18:52:21.657958   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152632.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:52:21.671812   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:52:21.685517   47103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:52:21.690768   47103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:52:21.690841   47103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:52:21.698172   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
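The ls/openssl/ln sequence above reimplements what c_rehash does: OpenSSL locates trusted CAs by <subject-hash>.0 filename, so each PEM gets a symlink named after its x509 -hash output (b5213941 being what the minikube CA subject hashes to, per the command above). By hand, for any one cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"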
	I1009 18:52:21.711438   47103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:52:21.717025   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:52:21.724810   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:52:21.732525   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:52:21.740116   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:52:21.747643   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:52:21.755162   47103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
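Each -checkend 86400 probe exits 0 only if the certificate will still be valid 86,400 seconds (24 hours) from now; a non-zero exit on any control-plane cert is what would push this path into regenerating certs instead of reusing them. Manual equivalent for one cert:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h (or unreadable); would regenerate"
    fi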
	I1009 18:52:21.762681   47103 kubeadm.go:400] StartCluster: {Name:test-preload-591097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-591097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:52:21.762807   47103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:52:21.762886   47103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:52:21.802569   47103 cri.go:89] found id: ""
	I1009 18:52:21.802658   47103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:52:21.816066   47103 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:52:21.816084   47103 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:52:21.816137   47103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:52:21.828697   47103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:52:21.829218   47103 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-591097" does not appear in /home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:52:21.829329   47103 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11352/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-591097" cluster setting kubeconfig missing "test-preload-591097" context setting]
	I1009 18:52:21.829586   47103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/kubeconfig: {Name:mk1298c937114ca750ad76f4defd3e77cda49052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:52:21.830132   47103 kapi.go:59] client config for test-preload-591097: &rest.Config{Host:"https://192.168.39.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:52:21.830509   47103 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:52:21.830522   47103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:52:21.830526   47103 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:52:21.830530   47103 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:52:21.830534   47103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:52:21.830864   47103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:52:21.842992   47103 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.4
	I1009 18:52:21.843032   47103 kubeadm.go:1160] stopping kube-system containers ...
	I1009 18:52:21.843057   47103 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 18:52:21.843111   47103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:52:21.883863   47103 cri.go:89] found id: ""
	I1009 18:52:21.883930   47103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:52:21.908348   47103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:52:21.920676   47103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:52:21.920706   47103 kubeadm.go:157] found existing configuration files:
	
	I1009 18:52:21.920765   47103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:52:21.932059   47103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:52:21.932118   47103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:52:21.944240   47103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:52:21.956243   47103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:52:21.956302   47103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:52:21.969305   47103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:52:21.981509   47103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:52:21.981599   47103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:52:21.994212   47103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:52:22.005857   47103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:52:22.005935   47103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:52:22.017776   47103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:52:22.029986   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:52:22.085735   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:52:22.758362   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:52:23.006964   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:52:23.082328   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
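Instead of a full kubeadm init, the restart path replays only the phases it needs against the generated config: certs, kubeconfigs, kubelet bootstrap, the static control-plane manifests, and local etcd. As a single script (binary and config paths from the log; word splitting of $phase is intentional):

    #!/usr/bin/env bash
    set -euo pipefail
    BIN=/var/lib/minikube/binaries/v1.32.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done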
	I1009 18:52:23.153806   47103 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:52:23.153886   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:23.654313   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:24.154564   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:24.653993   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:25.154823   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:25.654226   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:25.686494   47103 api_server.go:72] duration metric: took 2.532703427s to wait for apiserver process to appear ...
	I1009 18:52:25.686525   47103 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:52:25.686547   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:25.687147   47103 api_server.go:269] stopped: https://192.168.39.4:8443/healthz: Get "https://192.168.39.4:8443/healthz": dial tcp 192.168.39.4:8443: connect: connection refused
	I1009 18:52:26.186831   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:28.598423   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 18:52:28.598454   47103 api_server.go:103] status: https://192.168.39.4:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 18:52:28.598471   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:28.681131   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 18:52:28.681163   47103 api_server.go:103] status: https://192.168.39.4:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 18:52:28.687442   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:28.713880   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 18:52:28.713906   47103 api_server.go:103] status: https://192.168.39.4:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 18:52:29.187630   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:29.196014   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 18:52:29.196053   47103 api_server.go:103] status: https://192.168.39.4:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 18:52:29.687266   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:29.701663   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 18:52:29.701693   47103 api_server.go:103] status: https://192.168.39.4:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 18:52:30.187410   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:30.192303   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I1009 18:52:30.199235   47103 api_server.go:141] control plane version: v1.32.0
	I1009 18:52:30.199262   47103 api_server.go:131] duration metric: took 4.512730441s to wait for apiserver health ...
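The healthz progression above is the normal startup arc: connection refused while the static pod boots, 403 while the rbac/bootstrap-roles poststarthook has not yet installed the binding that lets unauthenticated clients read /healthz, 500 while the remaining poststarthooks (bootstrap-roles, priority classes, service-ip repair) drain, then 200. A minimal poll loop for the same endpoint (-k skips CA verification purely for brevity):

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.4:8443/healthz)" = 200 ]; do
      sleep 0.5
    done
    echo "apiserver healthy"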
	I1009 18:52:30.199271   47103 cni.go:84] Creating CNI manager for ""
	I1009 18:52:30.199278   47103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:52:30.201185   47103 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 18:52:30.202426   47103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 18:52:30.217255   47103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
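The 496 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI chain. The exact payload is not shown in the log; a representative conflist of the same general shape (all values below are assumptions, not the bytes minikube wrote) would be:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF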
	I1009 18:52:30.248434   47103 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:52:30.258586   47103 system_pods.go:59] 7 kube-system pods found
	I1009 18:52:30.258640   47103 system_pods.go:61] "coredns-668d6bf9bc-lvlbf" [3471d74c-151d-4404-b943-13fec6127c40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:52:30.258652   47103 system_pods.go:61] "etcd-test-preload-591097" [de923427-5c5f-4d25-8b10-03b879745ec0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 18:52:30.258660   47103 system_pods.go:61] "kube-apiserver-test-preload-591097" [91f1b85c-1bc6-41d1-98cc-292c4f187033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 18:52:30.258666   47103 system_pods.go:61] "kube-controller-manager-test-preload-591097" [dcc4f8d0-324e-417f-bd1f-6c6dca0325e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 18:52:30.258672   47103 system_pods.go:61] "kube-proxy-d4wr2" [3306837f-fe76-43df-a443-ea713cc73684] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 18:52:30.258678   47103 system_pods.go:61] "kube-scheduler-test-preload-591097" [31543ae7-91b6-48a9-b007-ce88f18cf4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 18:52:30.258681   47103 system_pods.go:61] "storage-provisioner" [54727b1d-7e52-461e-9ef1-e77d9071c2c1] Running
	I1009 18:52:30.258687   47103 system_pods.go:74] duration metric: took 10.228826ms to wait for pod list to return data ...
	I1009 18:52:30.258695   47103 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:52:30.272349   47103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 18:52:30.272378   47103 node_conditions.go:123] node cpu capacity is 2
	I1009 18:52:30.272389   47103 node_conditions.go:105] duration metric: took 13.689623ms to run NodePressure ...
	I1009 18:52:30.272437   47103 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:52:30.538089   47103 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1009 18:52:30.541619   47103 kubeadm.go:743] kubelet initialised
	I1009 18:52:30.541649   47103 kubeadm.go:744] duration metric: took 3.524133ms waiting for restarted kubelet to initialise ...
	I1009 18:52:30.541667   47103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:52:30.557168   47103 ops.go:34] apiserver oom_adj: -16
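oom_adj is the legacy kernel interface on a -17..15 scale; -16 is what a strongly negative oom_score_adj (the -997..-999 range kubelet assigns to critical control-plane pods) renders as, meaning the OOM killer will prefer almost anything else over the apiserver. Modern-interface check (same pgrep as above):

    cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # expect a value near -997/-999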
	I1009 18:52:30.557195   47103 kubeadm.go:601] duration metric: took 8.741106581s to restartPrimaryControlPlane
	I1009 18:52:30.557206   47103 kubeadm.go:402] duration metric: took 8.794537792s to StartCluster
	I1009 18:52:30.557226   47103 settings.go:142] acquiring lock: {Name:mke07af691f8cd3212916e5b2a1eaf75338ed4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:52:30.557297   47103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:52:30.557839   47103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/kubeconfig: {Name:mk1298c937114ca750ad76f4defd3e77cda49052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:52:30.558083   47103 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:52:30.558208   47103 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:52:30.558286   47103 addons.go:69] Setting storage-provisioner=true in profile "test-preload-591097"
	I1009 18:52:30.558299   47103 config.go:182] Loaded profile config "test-preload-591097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 18:52:30.558317   47103 addons.go:238] Setting addon storage-provisioner=true in "test-preload-591097"
	W1009 18:52:30.558333   47103 addons.go:247] addon storage-provisioner should already be in state true
	I1009 18:52:30.558363   47103 host.go:66] Checking if "test-preload-591097" exists ...
	I1009 18:52:30.558305   47103 addons.go:69] Setting default-storageclass=true in profile "test-preload-591097"
	I1009 18:52:30.558422   47103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-591097"
	I1009 18:52:30.558746   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:52:30.558775   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:52:30.558790   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:52:30.558834   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:52:30.559689   47103 out.go:179] * Verifying Kubernetes components...
	I1009 18:52:30.561034   47103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:52:30.573109   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I1009 18:52:30.573120   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I1009 18:52:30.573605   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:52:30.573667   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:52:30.574094   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:52:30.574111   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:52:30.574128   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:52:30.574142   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:52:30.574482   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:52:30.574517   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:52:30.574695   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetState
	I1009 18:52:30.574959   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:52:30.574987   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:52:30.577017   47103 kapi.go:59] client config for test-preload-591097: &rest.Config{Host:"https://192.168.39.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:52:30.577417   47103 addons.go:238] Setting addon default-storageclass=true in "test-preload-591097"
	W1009 18:52:30.577440   47103 addons.go:247] addon default-storageclass should already be in state true
	I1009 18:52:30.577469   47103 host.go:66] Checking if "test-preload-591097" exists ...
	I1009 18:52:30.577848   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:52:30.577897   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:52:30.589215   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I1009 18:52:30.589669   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:52:30.590129   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:52:30.590154   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:52:30.590530   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:52:30.590759   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetState
	I1009 18:52:30.592880   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:30.596159   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I1009 18:52:30.596522   47103 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:52:30.596636   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:52:30.597161   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:52:30.597196   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:52:30.597602   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:52:30.597931   47103 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:52:30.597947   47103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:52:30.597966   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:30.598113   47103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:52:30.598138   47103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:52:30.601726   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:30.602318   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:30.602349   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:30.602576   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:30.602760   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:30.602894   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:30.603066   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
	I1009 18:52:30.612698   47103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I1009 18:52:30.613333   47103 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:52:30.613943   47103 main.go:141] libmachine: Using API Version  1
	I1009 18:52:30.613972   47103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:52:30.614343   47103 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:52:30.614573   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetState
	I1009 18:52:30.616221   47103 main.go:141] libmachine: (test-preload-591097) Calling .DriverName
	I1009 18:52:30.616432   47103 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:52:30.616445   47103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:52:30.616461   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHHostname
	I1009 18:52:30.620115   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:30.620639   47103 main.go:141] libmachine: (test-preload-591097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:81:c6", ip: ""} in network mk-test-preload-591097: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:11 +0000 UTC Type:0 Mac:52:54:00:cf:81:c6 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:test-preload-591097 Clientid:01:52:54:00:cf:81:c6}
	I1009 18:52:30.620665   47103 main.go:141] libmachine: (test-preload-591097) DBG | domain test-preload-591097 has defined IP address 192.168.39.4 and MAC address 52:54:00:cf:81:c6 in network mk-test-preload-591097
	I1009 18:52:30.620911   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHPort
	I1009 18:52:30.621127   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHKeyPath
	I1009 18:52:30.621298   47103 main.go:141] libmachine: (test-preload-591097) Calling .GetSSHUsername
	I1009 18:52:30.621436   47103 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa Username:docker}
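The two sshutil clients above reuse the per-machine key that minikube generated for this profile. For anyone replaying this run by hand, an equivalent session (key path, user, and IP taken verbatim from the log lines above) would be roughly:

    ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/test-preload-591097/id_rsa \
      docker@192.168.39.4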
	I1009 18:52:30.807751   47103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:52:30.835421   47103 node_ready.go:35] waiting up to 6m0s for node "test-preload-591097" to be "Ready" ...
	I1009 18:52:30.970968   47103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:52:30.989952   47103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:52:31.643448   47103 main.go:141] libmachine: Making call to close driver server
	I1009 18:52:31.643475   47103 main.go:141] libmachine: (test-preload-591097) Calling .Close
	I1009 18:52:31.643517   47103 main.go:141] libmachine: Making call to close driver server
	I1009 18:52:31.643535   47103 main.go:141] libmachine: (test-preload-591097) Calling .Close
	I1009 18:52:31.643806   47103 main.go:141] libmachine: (test-preload-591097) DBG | Closing plugin on server side
	I1009 18:52:31.643843   47103 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:52:31.643851   47103 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:52:31.643867   47103 main.go:141] libmachine: (test-preload-591097) DBG | Closing plugin on server side
	I1009 18:52:31.643875   47103 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:52:31.643884   47103 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:52:31.643890   47103 main.go:141] libmachine: Making call to close driver server
	I1009 18:52:31.643897   47103 main.go:141] libmachine: (test-preload-591097) Calling .Close
	I1009 18:52:31.643870   47103 main.go:141] libmachine: Making call to close driver server
	I1009 18:52:31.643947   47103 main.go:141] libmachine: (test-preload-591097) Calling .Close
	I1009 18:52:31.644134   47103 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:52:31.644150   47103 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:52:31.644203   47103 main.go:141] libmachine: (test-preload-591097) DBG | Closing plugin on server side
	I1009 18:52:31.644225   47103 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:52:31.644232   47103 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:52:31.652396   47103 main.go:141] libmachine: Making call to close driver server
	I1009 18:52:31.652420   47103 main.go:141] libmachine: (test-preload-591097) Calling .Close
	I1009 18:52:31.652686   47103 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:52:31.652703   47103 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:52:31.652716   47103 main.go:141] libmachine: (test-preload-591097) DBG | Closing plugin on server side
	I1009 18:52:31.654631   47103 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 18:52:31.655936   47103 addons.go:514] duration metric: took 1.097732031s for enable addons: enabled=[storage-provisioner default-storageclass]
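Both default addons report enabled after roughly 1.1 s; their state can be confirmed afterwards with the profile's addon listing (a sketch):

    out/minikube-linux-amd64 -p test-preload-591097 addons list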
	W1009 18:52:32.839529   47103 node_ready.go:57] node "test-preload-591097" has "Ready":"False" status (will retry)
	W1009 18:52:35.339219   47103 node_ready.go:57] node "test-preload-591097" has "Ready":"False" status (will retry)
	W1009 18:52:37.339822   47103 node_ready.go:57] node "test-preload-591097" has "Ready":"False" status (will retry)
	I1009 18:52:38.838769   47103 node_ready.go:49] node "test-preload-591097" is "Ready"
	I1009 18:52:38.838808   47103 node_ready.go:38] duration metric: took 8.003303228s for node "test-preload-591097" to be "Ready" ...
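The node_ready poll above retries every couple of seconds until the node's Ready condition flips to True, which here took about 8 s of the 6 m budget. The same wait expressed directly with kubectl (a sketch, assuming this profile's kubeconfig context is active):

    kubectl wait --for=condition=Ready node/test-preload-591097 --timeout=6m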
	I1009 18:52:38.838823   47103 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:52:38.838878   47103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:52:38.860348   47103 api_server.go:72] duration metric: took 8.302226321s to wait for apiserver process to appear ...
	I1009 18:52:38.860376   47103 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:52:38.860396   47103 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I1009 18:52:38.865763   47103 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I1009 18:52:38.866789   47103 api_server.go:141] control plane version: v1.32.0
	I1009 18:52:38.866814   47103 api_server.go:131] duration metric: took 6.429589ms to wait for apiserver health ...
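The healthz probe is a plain HTTPS GET; /healthz should be readable even without client certificates under the default system:public-info-viewer binding, so the same check from the host would be as simple as (a sketch):

    curl -sk https://192.168.39.4:8443/healthz   # expected body: ok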
	I1009 18:52:38.866824   47103 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:52:38.870268   47103 system_pods.go:59] 7 kube-system pods found
	I1009 18:52:38.870296   47103 system_pods.go:61] "coredns-668d6bf9bc-lvlbf" [3471d74c-151d-4404-b943-13fec6127c40] Running
	I1009 18:52:38.870304   47103 system_pods.go:61] "etcd-test-preload-591097" [de923427-5c5f-4d25-8b10-03b879745ec0] Running
	I1009 18:52:38.870320   47103 system_pods.go:61] "kube-apiserver-test-preload-591097" [91f1b85c-1bc6-41d1-98cc-292c4f187033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 18:52:38.870333   47103 system_pods.go:61] "kube-controller-manager-test-preload-591097" [dcc4f8d0-324e-417f-bd1f-6c6dca0325e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 18:52:38.870340   47103 system_pods.go:61] "kube-proxy-d4wr2" [3306837f-fe76-43df-a443-ea713cc73684] Running
	I1009 18:52:38.870351   47103 system_pods.go:61] "kube-scheduler-test-preload-591097" [31543ae7-91b6-48a9-b007-ce88f18cf4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 18:52:38.870356   47103 system_pods.go:61] "storage-provisioner" [54727b1d-7e52-461e-9ef1-e77d9071c2c1] Running
	I1009 18:52:38.870363   47103 system_pods.go:74] duration metric: took 3.533412ms to wait for pod list to return data ...
	I1009 18:52:38.870370   47103 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:52:38.872871   47103 default_sa.go:45] found service account: "default"
	I1009 18:52:38.872892   47103 default_sa.go:55] duration metric: took 2.51598ms for default service account to be created ...
	I1009 18:52:38.872899   47103 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:52:38.876624   47103 system_pods.go:86] 7 kube-system pods found
	I1009 18:52:38.876645   47103 system_pods.go:89] "coredns-668d6bf9bc-lvlbf" [3471d74c-151d-4404-b943-13fec6127c40] Running
	I1009 18:52:38.876651   47103 system_pods.go:89] "etcd-test-preload-591097" [de923427-5c5f-4d25-8b10-03b879745ec0] Running
	I1009 18:52:38.876662   47103 system_pods.go:89] "kube-apiserver-test-preload-591097" [91f1b85c-1bc6-41d1-98cc-292c4f187033] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 18:52:38.876669   47103 system_pods.go:89] "kube-controller-manager-test-preload-591097" [dcc4f8d0-324e-417f-bd1f-6c6dca0325e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 18:52:38.876675   47103 system_pods.go:89] "kube-proxy-d4wr2" [3306837f-fe76-43df-a443-ea713cc73684] Running
	I1009 18:52:38.876680   47103 system_pods.go:89] "kube-scheduler-test-preload-591097" [31543ae7-91b6-48a9-b007-ce88f18cf4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 18:52:38.876686   47103 system_pods.go:89] "storage-provisioner" [54727b1d-7e52-461e-9ef1-e77d9071c2c1] Running
	I1009 18:52:38.876692   47103 system_pods.go:126] duration metric: took 3.78872ms to wait for k8s-apps to be running ...
	I1009 18:52:38.876699   47103 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:52:38.876741   47103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:52:38.893238   47103 system_svc.go:56] duration metric: took 16.526396ms WaitForService to wait for kubelet
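WaitForService reduces to that single systemctl query over SSH; it can be reproduced through the same ssh wrapper the harness uses elsewhere (a sketch):

    out/minikube-linux-amd64 -p test-preload-591097 ssh "sudo systemctl is-active kubelet"   # prints: active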
	I1009 18:52:38.893270   47103 kubeadm.go:586] duration metric: took 8.335154669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:52:38.893294   47103 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:52:38.896494   47103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 18:52:38.896516   47103 node_conditions.go:123] node cpu capacity is 2
	I1009 18:52:38.896529   47103 node_conditions.go:105] duration metric: took 3.229572ms to run NodePressure ...
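The NodePressure check reads those figures straight off the Node object; the same two numbers can be pulled with a jsonpath query (a sketch, assuming the profile's context is active):

    kubectl get node test-preload-591097 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}'   # 2 17734596Ki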
	I1009 18:52:38.896540   47103 start.go:241] waiting for startup goroutines ...
	I1009 18:52:38.896546   47103 start.go:246] waiting for cluster config update ...
	I1009 18:52:38.896556   47103 start.go:255] writing updated cluster config ...
	I1009 18:52:38.896815   47103 ssh_runner.go:195] Run: rm -f paused
	I1009 18:52:38.902293   47103 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:52:38.902780   47103 kapi.go:59] client config for test-preload-591097: &rest.Config{Host:"https://192.168.39.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/profiles/test-preload-591097/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:52:38.906024   47103 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-lvlbf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:38.911294   47103 pod_ready.go:94] pod "coredns-668d6bf9bc-lvlbf" is "Ready"
	I1009 18:52:38.911315   47103 pod_ready.go:86] duration metric: took 5.263801ms for pod "coredns-668d6bf9bc-lvlbf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:38.913371   47103 pod_ready.go:83] waiting for pod "etcd-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:38.918476   47103 pod_ready.go:94] pod "etcd-test-preload-591097" is "Ready"
	I1009 18:52:38.918505   47103 pod_ready.go:86] duration metric: took 5.106002ms for pod "etcd-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:38.920703   47103 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 18:52:40.927068   47103 pod_ready.go:104] pod "kube-apiserver-test-preload-591097" is not "Ready", error: <nil>
	W1009 18:52:43.427467   47103 pod_ready.go:104] pod "kube-apiserver-test-preload-591097" is not "Ready", error: <nil>
	I1009 18:52:45.427334   47103 pod_ready.go:94] pod "kube-apiserver-test-preload-591097" is "Ready"
	I1009 18:52:45.427365   47103 pod_ready.go:86] duration metric: took 6.506641674s for pod "kube-apiserver-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.429464   47103 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.434329   47103 pod_ready.go:94] pod "kube-controller-manager-test-preload-591097" is "Ready"
	I1009 18:52:45.434360   47103 pod_ready.go:86] duration metric: took 4.874995ms for pod "kube-controller-manager-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.436891   47103 pod_ready.go:83] waiting for pod "kube-proxy-d4wr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.441723   47103 pod_ready.go:94] pod "kube-proxy-d4wr2" is "Ready"
	I1009 18:52:45.441772   47103 pod_ready.go:86] duration metric: took 4.851651ms for pod "kube-proxy-d4wr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.445717   47103 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.707496   47103 pod_ready.go:94] pod "kube-scheduler-test-preload-591097" is "Ready"
	I1009 18:52:45.707535   47103 pod_ready.go:86] duration metric: took 261.795786ms for pod "kube-scheduler-test-preload-591097" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:52:45.707552   47103 pod_ready.go:40] duration metric: took 6.805223839s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
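That extra wait walks the six control-plane labels one pod at a time, tolerating pods that disappear ("or be gone"). Ignoring that escape hatch, a batch equivalent with kubectl would be (a sketch):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l 'k8s-app in (kube-dns, kube-proxy)' --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --timeout=4m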
	I1009 18:52:45.752372   47103 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1009 18:52:45.753991   47103 out.go:203] 
	W1009 18:52:45.755399   47103 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1009 18:52:45.756475   47103 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1009 18:52:45.757945   47103 out.go:179] * Done! kubectl is now configured to use "test-preload-591097" cluster and "default" namespace by default
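The closing warning is only about client/server skew: kubectl is supported within one minor version of the apiserver, and 1.34 against 1.32 is two. The log already names the workaround; the bundled, version-matched client is invoked as (a sketch):

    out/minikube-linux-amd64 -p test-preload-591097 kubectl -- get pods -A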
	
	
	==> CRI-O <==
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.656265061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035966656239739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f60ed5ab-81e8-4997-b5c6-a44b72e1a67e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.656984011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c08b5631-e8cc-4a02-b4a1-b3e1cfae073e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.657144222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c08b5631-e8cc-4a02-b4a1-b3e1cfae073e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.657526421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ebe42262a2a0b778f433246bc607a698a412acb7b645fedeca7a03ab4e4c5e,PodSandboxId:87ee63e720ad93fa814b0c56144efb021fc4b64d0e33d591cc24e0800fa0d44b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760035956987960678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lvlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3471d74c-151d-4404-b943-13fec6127c40,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd7297bcef83d87d4524cf3d50ce2d52f5acc8b53fa69b7e40316118ebf6f6a,PodSandboxId:743fe1b509f786dd5404247ee4d96c92af7cceae6f3402fd5c60e13a4da6aff6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035949536524893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 54727b1d-7e52-461e-9ef1-e77d9071c2c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671f43211413fa0410497772c9c4cb08bf4353d77fa7ef34f2a3767394cd047a,PodSandboxId:cecd1ed5851bb1fdc0cc5ab7993e375bdf250e17d695aaab30cf37b736bc71b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760035949490878177,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4wr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
06837f-fe76-43df-a443-ea713cc73684,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d54d26bb76944d94521a4f18e79bdb76a441c536b7e47fdd1ed84b41abe5dfe,PodSandboxId:1a10f317b2ef62ae04ec21d39940b08ca6273f016e3f07d3f215fcd5943db2b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760035945341536170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379acedad
f257b65df14d2926c44d03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4891c2ef59420bcb6ea08219da1f94101f01e7947ae3ce66adebef2398b1166d,PodSandboxId:f1b4271d9411c7463195a31e013e3060ef07cb4daa9bbd6500bf03b582363ab8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760035945331029799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2514939dcafe91726ce9
5b0250f54227,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79264fc9309a29c27899a9781f0b2d56138460b027a94d0b93929cd6aed77241,PodSandboxId:d655e9f4035d6d556d1e291f14777318d74cffddd2786e9acaef0d82cf22151e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760035945309051611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0b1b22c3b717395471dcf67c89fd7f,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d53a0b28e81443f6adc0abc2f68fa9fe86b44baed067d2046e94d59223a2eb,PodSandboxId:358bef1538b3cd675b01e35f689ad950c97915369670cbc750bba09db904f5c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760035945250793636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f774dee07a1bbef0248ddbf1f7f2e0b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c08b5631-e8cc-4a02-b4a1-b3e1cfae073e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.696765461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3f241c6-8e3a-4d24-9fde-402c11b8e4df name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.697077845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3f241c6-8e3a-4d24-9fde-402c11b8e4df name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.698578436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=800182c8-8dce-4501-b191-b4850692077a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.699057108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035966699032244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=800182c8-8dce-4501-b191-b4850692077a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.699587159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0371927-a17c-47d5-9ad0-e8a74ef0738b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.699701162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0371927-a17c-47d5-9ad0-e8a74ef0738b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.699902892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ebe42262a2a0b778f433246bc607a698a412acb7b645fedeca7a03ab4e4c5e,PodSandboxId:87ee63e720ad93fa814b0c56144efb021fc4b64d0e33d591cc24e0800fa0d44b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760035956987960678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lvlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3471d74c-151d-4404-b943-13fec6127c40,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd7297bcef83d87d4524cf3d50ce2d52f5acc8b53fa69b7e40316118ebf6f6a,PodSandboxId:743fe1b509f786dd5404247ee4d96c92af7cceae6f3402fd5c60e13a4da6aff6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035949536524893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 54727b1d-7e52-461e-9ef1-e77d9071c2c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671f43211413fa0410497772c9c4cb08bf4353d77fa7ef34f2a3767394cd047a,PodSandboxId:cecd1ed5851bb1fdc0cc5ab7993e375bdf250e17d695aaab30cf37b736bc71b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760035949490878177,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4wr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
06837f-fe76-43df-a443-ea713cc73684,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d54d26bb76944d94521a4f18e79bdb76a441c536b7e47fdd1ed84b41abe5dfe,PodSandboxId:1a10f317b2ef62ae04ec21d39940b08ca6273f016e3f07d3f215fcd5943db2b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760035945341536170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379acedad
f257b65df14d2926c44d03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4891c2ef59420bcb6ea08219da1f94101f01e7947ae3ce66adebef2398b1166d,PodSandboxId:f1b4271d9411c7463195a31e013e3060ef07cb4daa9bbd6500bf03b582363ab8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760035945331029799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2514939dcafe91726ce9
5b0250f54227,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79264fc9309a29c27899a9781f0b2d56138460b027a94d0b93929cd6aed77241,PodSandboxId:d655e9f4035d6d556d1e291f14777318d74cffddd2786e9acaef0d82cf22151e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760035945309051611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0b1b22c3b717395471dcf67c89fd7f,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d53a0b28e81443f6adc0abc2f68fa9fe86b44baed067d2046e94d59223a2eb,PodSandboxId:358bef1538b3cd675b01e35f689ad950c97915369670cbc750bba09db904f5c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760035945250793636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f774dee07a1bbef0248ddbf1f7f2e0b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0371927-a17c-47d5-9ad0-e8a74ef0738b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.738416350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc153545-eb7e-4b13-a40f-205e0a9dc35d name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.738660187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc153545-eb7e-4b13-a40f-205e0a9dc35d name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.739909204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d31745d6-edd9-406f-83ed-8d68cd5ff543 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.740552721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035966740526577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d31745d6-edd9-406f-83ed-8d68cd5ff543 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.741374502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7b268c7-c0d3-4a92-b780-f516fb94f357 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.741471450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7b268c7-c0d3-4a92-b780-f516fb94f357 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.741721529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ebe42262a2a0b778f433246bc607a698a412acb7b645fedeca7a03ab4e4c5e,PodSandboxId:87ee63e720ad93fa814b0c56144efb021fc4b64d0e33d591cc24e0800fa0d44b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760035956987960678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lvlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3471d74c-151d-4404-b943-13fec6127c40,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd7297bcef83d87d4524cf3d50ce2d52f5acc8b53fa69b7e40316118ebf6f6a,PodSandboxId:743fe1b509f786dd5404247ee4d96c92af7cceae6f3402fd5c60e13a4da6aff6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035949536524893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 54727b1d-7e52-461e-9ef1-e77d9071c2c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671f43211413fa0410497772c9c4cb08bf4353d77fa7ef34f2a3767394cd047a,PodSandboxId:cecd1ed5851bb1fdc0cc5ab7993e375bdf250e17d695aaab30cf37b736bc71b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760035949490878177,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4wr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
06837f-fe76-43df-a443-ea713cc73684,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d54d26bb76944d94521a4f18e79bdb76a441c536b7e47fdd1ed84b41abe5dfe,PodSandboxId:1a10f317b2ef62ae04ec21d39940b08ca6273f016e3f07d3f215fcd5943db2b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760035945341536170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379acedad
f257b65df14d2926c44d03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4891c2ef59420bcb6ea08219da1f94101f01e7947ae3ce66adebef2398b1166d,PodSandboxId:f1b4271d9411c7463195a31e013e3060ef07cb4daa9bbd6500bf03b582363ab8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760035945331029799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2514939dcafe91726ce9
5b0250f54227,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79264fc9309a29c27899a9781f0b2d56138460b027a94d0b93929cd6aed77241,PodSandboxId:d655e9f4035d6d556d1e291f14777318d74cffddd2786e9acaef0d82cf22151e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760035945309051611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0b1b22c3b717395471dcf67c89fd7f,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d53a0b28e81443f6adc0abc2f68fa9fe86b44baed067d2046e94d59223a2eb,PodSandboxId:358bef1538b3cd675b01e35f689ad950c97915369670cbc750bba09db904f5c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760035945250793636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f774dee07a1bbef0248ddbf1f7f2e0b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7b268c7-c0d3-4a92-b780-f516fb94f357 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.775991320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cda005d-09f5-407b-8dc3-0db8cd7fb790 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.776090234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cda005d-09f5-407b-8dc3-0db8cd7fb790 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.777574060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a9122fc-f39b-4fb1-a863-93c36fae8b54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.778057938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035966778034753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a9122fc-f39b-4fb1-a863-93c36fae8b54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.779006016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8200307-b0e7-40f3-844e-0a5d77f8f7d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.779152030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8200307-b0e7-40f3-844e-0a5d77f8f7d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:52:46 test-preload-591097 crio[833]: time="2025-10-09 18:52:46.779478208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1ebe42262a2a0b778f433246bc607a698a412acb7b645fedeca7a03ab4e4c5e,PodSandboxId:87ee63e720ad93fa814b0c56144efb021fc4b64d0e33d591cc24e0800fa0d44b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760035956987960678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lvlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3471d74c-151d-4404-b943-13fec6127c40,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd7297bcef83d87d4524cf3d50ce2d52f5acc8b53fa69b7e40316118ebf6f6a,PodSandboxId:743fe1b509f786dd5404247ee4d96c92af7cceae6f3402fd5c60e13a4da6aff6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035949536524893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 54727b1d-7e52-461e-9ef1-e77d9071c2c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671f43211413fa0410497772c9c4cb08bf4353d77fa7ef34f2a3767394cd047a,PodSandboxId:cecd1ed5851bb1fdc0cc5ab7993e375bdf250e17d695aaab30cf37b736bc71b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760035949490878177,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4wr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33
06837f-fe76-43df-a443-ea713cc73684,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d54d26bb76944d94521a4f18e79bdb76a441c536b7e47fdd1ed84b41abe5dfe,PodSandboxId:1a10f317b2ef62ae04ec21d39940b08ca6273f016e3f07d3f215fcd5943db2b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760035945341536170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379acedad
f257b65df14d2926c44d03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4891c2ef59420bcb6ea08219da1f94101f01e7947ae3ce66adebef2398b1166d,PodSandboxId:f1b4271d9411c7463195a31e013e3060ef07cb4daa9bbd6500bf03b582363ab8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760035945331029799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2514939dcafe91726ce9
5b0250f54227,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79264fc9309a29c27899a9781f0b2d56138460b027a94d0b93929cd6aed77241,PodSandboxId:d655e9f4035d6d556d1e291f14777318d74cffddd2786e9acaef0d82cf22151e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760035945309051611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0b1b22c3b717395471dcf67c89fd7f,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4d53a0b28e81443f6adc0abc2f68fa9fe86b44baed067d2046e94d59223a2eb,PodSandboxId:358bef1538b3cd675b01e35f689ad950c97915369670cbc750bba09db904f5c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760035945250793636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-591097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f774dee07a1bbef0248ddbf1f7f2e0b4,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8200307-b0e7-40f3-844e-0a5d77f8f7d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1ebe42262a2a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   1                   87ee63e720ad9       coredns-668d6bf9bc-lvlbf
	efd7297bcef83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   743fe1b509f78       storage-provisioner
	671f43211413f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   cecd1ed5851bb       kube-proxy-d4wr2
	7d54d26bb7694       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   1a10f317b2ef6       kube-scheduler-test-preload-591097
	4891c2ef59420       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   f1b4271d9411c       kube-apiserver-test-preload-591097
	79264fc9309a2       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   d655e9f4035d6       etcd-test-preload-591097
	c4d53a0b28e81       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   358bef1538b3c       kube-controller-manager-test-preload-591097
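All seven containers sit at ATTEMPT 1: each was recreated exactly once when the preloaded cluster restarted. The table is the CRI's own view, so it should match what crictl reports against the same socket inside the VM (a sketch):

    out/minikube-linux-amd64 -p test-preload-591097 ssh "sudo crictl ps"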
	
	
	==> coredns [a1ebe42262a2a0b778f433246bc607a698a412acb7b645fedeca7a03ab4e4c5e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59742 - 48 "HINFO IN 1659356800032819045.1646984575453383426. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023548146s
	
	
	==> describe nodes <==
	Name:               test-preload-591097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-591097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=test-preload-591097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T18_50_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 18:50:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-591097
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:52:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:52:38 +0000   Thu, 09 Oct 2025 18:50:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:52:38 +0000   Thu, 09 Oct 2025 18:50:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:52:38 +0000   Thu, 09 Oct 2025 18:50:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:52:38 +0000   Thu, 09 Oct 2025 18:52:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    test-preload-591097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8464be8a9e243b9981a191e3b63125a
	  System UUID:                c8464be8-a9e2-43b9-981a-191e3b63125a
	  Boot ID:                    131d29a7-7caa-4bb8-a7d9-03cd952b35e3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-lvlbf                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-591097                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-591097             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-591097    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-d4wr2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-test-preload-591097             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 102s               kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  109s               kubelet          Node test-preload-591097 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    109s               kubelet          Node test-preload-591097 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s               kubelet          Node test-preload-591097 status is now: NodeHasSufficientPID
	  Normal   Starting                 109s               kubelet          Starting kubelet.
	  Normal   NodeReady                108s               kubelet          Node test-preload-591097 status is now: NodeReady
	  Normal   RegisteredNode           105s               node-controller  Node test-preload-591097 event: Registered Node test-preload-591097 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-591097 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-591097 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-591097 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-591097 has been rebooted, boot id: 131d29a7-7caa-4bb8-a7d9-03cd952b35e3
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-591097 event: Registered Node test-preload-591097 in Controller
	
	
	==> dmesg <==
	[Oct 9 18:52] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001635] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.075653] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083145] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.097637] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.440747] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.028230] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [79264fc9309a29c27899a9781f0b2d56138460b027a94d0b93929cd6aed77241] <==
	{"level":"info","ts":"2025-10-09T18:52:25.693366Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6b117bdc86acb526","local-member-id":"7ab0973fa604e492","added-peer-id":"7ab0973fa604e492","added-peer-peer-urls":["https://192.168.39.4:2380"]}
	{"level":"info","ts":"2025-10-09T18:52:25.693463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b117bdc86acb526","local-member-id":"7ab0973fa604e492","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T18:52:25.693500Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T18:52:25.696383Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-09T18:52:25.714974Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T18:52:25.715307Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7ab0973fa604e492","initial-advertise-peer-urls":["https://192.168.39.4:2380"],"listen-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T18:52:25.715335Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T18:52:25.716114Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2025-10-09T18:52:25.716130Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2025-10-09T18:52:27.447027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T18:52:27.447137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T18:52:27.447182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgPreVoteResp from 7ab0973fa604e492 at term 2"}
	{"level":"info","ts":"2025-10-09T18:52:27.447671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T18:52:27.447718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 received MsgVoteResp from 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2025-10-09T18:52:27.447739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 became leader at term 3"}
	{"level":"info","ts":"2025-10-09T18:52:27.447758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7ab0973fa604e492 elected leader 7ab0973fa604e492 at term 3"}
	{"level":"info","ts":"2025-10-09T18:52:27.452513Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7ab0973fa604e492","local-member-attributes":"{Name:test-preload-591097 ClientURLs:[https://192.168.39.4:2379]}","request-path":"/0/members/7ab0973fa604e492/attributes","cluster-id":"6b117bdc86acb526","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T18:52:27.452758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T18:52:27.452820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T18:52:27.453803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T18:52:27.455779Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-09T18:52:27.455451Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-09T18:52:27.456340Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T18:52:27.458684Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-09T18:52:27.459189Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.4:2379"}
	
	
	==> kernel <==
	 18:52:47 up 0 min,  0 users,  load average: 0.72, 0.20, 0.07
	Linux test-preload-591097 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4891c2ef59420bcb6ea08219da1f94101f01e7947ae3ce66adebef2398b1166d] <==
	I1009 18:52:28.678173       1 policy_source.go:240] refreshing policies
	I1009 18:52:28.699432       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 18:52:28.702463       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1009 18:52:28.702547       1 aggregator.go:171] initial CRD sync complete...
	I1009 18:52:28.702555       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 18:52:28.702560       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 18:52:28.702565       1 cache.go:39] Caches are synced for autoregister controller
	I1009 18:52:28.744986       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1009 18:52:28.761330       1 shared_informer.go:320] Caches are synced for configmaps
	I1009 18:52:28.761488       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1009 18:52:28.761538       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 18:52:28.761544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 18:52:28.762911       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 18:52:28.763055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 18:52:28.763599       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 18:52:28.783242       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 18:52:29.173032       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 18:52:29.575790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 18:52:30.369248       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 18:52:30.406572       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 18:52:30.446749       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 18:52:30.454817       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 18:52:32.229361       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 18:52:32.281317       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 18:52:32.330185       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c4d53a0b28e81443f6adc0abc2f68fa9fe86b44baed067d2046e94d59223a2eb] <==
	I1009 18:52:31.926506       1 shared_informer.go:320] Caches are synced for daemon sets
	I1009 18:52:31.926550       1 shared_informer.go:320] Caches are synced for TTL
	I1009 18:52:31.926699       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1009 18:52:31.927875       1 shared_informer.go:320] Caches are synced for PVC protection
	I1009 18:52:31.928834       1 shared_informer.go:320] Caches are synced for stateful set
	I1009 18:52:31.930225       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1009 18:52:31.932806       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1009 18:52:31.933316       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 18:52:31.940547       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 18:52:31.958905       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 18:52:31.964118       1 shared_informer.go:320] Caches are synced for node
	I1009 18:52:31.964366       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 18:52:31.964451       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 18:52:31.964471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1009 18:52:31.964489       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1009 18:52:31.964660       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-591097"
	I1009 18:52:31.967449       1 shared_informer.go:320] Caches are synced for attach detach
	I1009 18:52:32.288594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="358.268167ms"
	I1009 18:52:32.288932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.939µs"
	I1009 18:52:37.274454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="61.226µs"
	I1009 18:52:37.305915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.945864ms"
	I1009 18:52:37.306131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="170.896µs"
	I1009 18:52:38.806264       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-591097"
	I1009 18:52:38.819256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-591097"
	I1009 18:52:41.881730       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [671f43211413fa0410497772c9c4cb08bf4353d77fa7ef34f2a3767394cd047a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 18:52:29.867968       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 18:52:29.879357       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E1009 18:52:29.879829       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:52:29.919311       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1009 18:52:29.919393       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:52:29.919418       1 server_linux.go:170] "Using iptables Proxier"
	I1009 18:52:29.922092       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:52:29.922405       1 server.go:497] "Version info" version="v1.32.0"
	I1009 18:52:29.922440       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:52:29.924028       1 config.go:199] "Starting service config controller"
	I1009 18:52:29.924071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:52:29.924100       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:52:29.924104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:52:29.927387       1 config.go:329] "Starting node config controller"
	I1009 18:52:29.927417       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:52:30.024370       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 18:52:30.024407       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:52:30.027869       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d54d26bb76944d94521a4f18e79bdb76a441c536b7e47fdd1ed84b41abe5dfe] <==
	I1009 18:52:26.790953       1 serving.go:386] Generated self-signed cert in-memory
	W1009 18:52:28.612715       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 18:52:28.612750       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 18:52:28.612759       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 18:52:28.612769       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 18:52:28.669531       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1009 18:52:28.671740       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:52:28.678973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:52:28.679020       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 18:52:28.680827       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 18:52:28.680856       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 18:52:28.779844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: I1009 18:52:28.786510    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-591097"
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: E1009 18:52:28.799122    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-591097\" already exists" pod="kube-system/kube-controller-manager-test-preload-591097"
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: I1009 18:52:28.799147    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-591097"
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: E1009 18:52:28.808164    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-591097\" already exists" pod="kube-system/kube-scheduler-test-preload-591097"
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: I1009 18:52:28.808208    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-591097"
	Oct 09 18:52:28 test-preload-591097 kubelet[1164]: E1009 18:52:28.820726    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-591097\" already exists" pod="kube-system/etcd-test-preload-591097"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: I1009 18:52:29.069386    1164 apiserver.go:52] "Watching apiserver"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: E1009 18:52:29.075551    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lvlbf" podUID="3471d74c-151d-4404-b943-13fec6127c40"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: I1009 18:52:29.090284    1164 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: I1009 18:52:29.165341    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/54727b1d-7e52-461e-9ef1-e77d9071c2c1-tmp\") pod \"storage-provisioner\" (UID: \"54727b1d-7e52-461e-9ef1-e77d9071c2c1\") " pod="kube-system/storage-provisioner"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: I1009 18:52:29.165405    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3306837f-fe76-43df-a443-ea713cc73684-lib-modules\") pod \"kube-proxy-d4wr2\" (UID: \"3306837f-fe76-43df-a443-ea713cc73684\") " pod="kube-system/kube-proxy-d4wr2"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: I1009 18:52:29.165425    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3306837f-fe76-43df-a443-ea713cc73684-xtables-lock\") pod \"kube-proxy-d4wr2\" (UID: \"3306837f-fe76-43df-a443-ea713cc73684\") " pod="kube-system/kube-proxy-d4wr2"
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: E1009 18:52:29.166335    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: E1009 18:52:29.166450    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume podName:3471d74c-151d-4404-b943-13fec6127c40 nodeName:}" failed. No retries permitted until 2025-10-09 18:52:29.666397073 +0000 UTC m=+6.686009869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume") pod "coredns-668d6bf9bc-lvlbf" (UID: "3471d74c-151d-4404-b943-13fec6127c40") : object "kube-system"/"coredns" not registered
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: E1009 18:52:29.669400    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 18:52:29 test-preload-591097 kubelet[1164]: E1009 18:52:29.669512    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume podName:3471d74c-151d-4404-b943-13fec6127c40 nodeName:}" failed. No retries permitted until 2025-10-09 18:52:30.669476012 +0000 UTC m=+7.689088809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume") pod "coredns-668d6bf9bc-lvlbf" (UID: "3471d74c-151d-4404-b943-13fec6127c40") : object "kube-system"/"coredns" not registered
	Oct 09 18:52:30 test-preload-591097 kubelet[1164]: E1009 18:52:30.677107    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 18:52:30 test-preload-591097 kubelet[1164]: E1009 18:52:30.677169    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume podName:3471d74c-151d-4404-b943-13fec6127c40 nodeName:}" failed. No retries permitted until 2025-10-09 18:52:32.677156544 +0000 UTC m=+9.696769351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume") pod "coredns-668d6bf9bc-lvlbf" (UID: "3471d74c-151d-4404-b943-13fec6127c40") : object "kube-system"/"coredns" not registered
	Oct 09 18:52:31 test-preload-591097 kubelet[1164]: E1009 18:52:31.153919    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lvlbf" podUID="3471d74c-151d-4404-b943-13fec6127c40"
	Oct 09 18:52:32 test-preload-591097 kubelet[1164]: E1009 18:52:32.690771    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 18:52:32 test-preload-591097 kubelet[1164]: E1009 18:52:32.690869    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume podName:3471d74c-151d-4404-b943-13fec6127c40 nodeName:}" failed. No retries permitted until 2025-10-09 18:52:36.690850667 +0000 UTC m=+13.710463464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3471d74c-151d-4404-b943-13fec6127c40-config-volume") pod "coredns-668d6bf9bc-lvlbf" (UID: "3471d74c-151d-4404-b943-13fec6127c40") : object "kube-system"/"coredns" not registered
	Oct 09 18:52:33 test-preload-591097 kubelet[1164]: E1009 18:52:33.149004    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035953148514485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:52:33 test-preload-591097 kubelet[1164]: E1009 18:52:33.149045    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035953148514485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:52:43 test-preload-591097 kubelet[1164]: E1009 18:52:43.150175    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035963149904668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:52:43 test-preload-591097 kubelet[1164]: E1009 18:52:43.150216    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035963149904668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [efd7297bcef83d87d4524cf3d50ce2d52f5acc8b53fa69b7e40316118ebf6f6a] <==
	I1009 18:52:29.776154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-591097 -n test-preload-591097
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-591097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-591097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-591097
--- FAIL: TestPreload (162.87s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (74.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.556418742s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-706613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-706613" primary control-plane node in "pause-706613" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-706613" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:57:55.231507   53754 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:57:55.231818   53754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:57:55.231829   53754 out.go:374] Setting ErrFile to fd 2...
	I1009 18:57:55.231837   53754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:57:55.232105   53754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:57:55.232566   53754 out.go:368] Setting JSON to false
	I1009 18:57:55.233462   53754 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6015,"bootTime":1760030260,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:57:55.233553   53754 start.go:141] virtualization: kvm guest
	I1009 18:57:55.235668   53754 out.go:179] * [pause-706613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:57:55.237198   53754 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:57:55.237202   53754 notify.go:220] Checking for updates...
	I1009 18:57:55.238602   53754 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:57:55.240217   53754 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:57:55.241641   53754 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:57:55.243110   53754 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:57:55.244466   53754 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:57:55.246739   53754 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:57:55.247436   53754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:57:55.247542   53754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:57:55.264190   53754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I1009 18:57:55.264893   53754 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:57:55.265623   53754 main.go:141] libmachine: Using API Version  1
	I1009 18:57:55.265653   53754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:57:55.266072   53754 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:57:55.266302   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:57:55.266610   53754 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:57:55.267011   53754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:57:55.267142   53754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:57:55.281930   53754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I1009 18:57:55.282386   53754 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:57:55.282964   53754 main.go:141] libmachine: Using API Version  1
	I1009 18:57:55.283001   53754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:57:55.283430   53754 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:57:55.283654   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:57:55.322758   53754 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 18:57:55.324361   53754 start.go:305] selected driver: kvm2
	I1009 18:57:55.324383   53754 start.go:925] validating driver "kvm2" against &{Name:pause-706613 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:57:55.324541   53754 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:57:55.324904   53754 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:57:55.324987   53754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:57:55.341429   53754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:57:55.341461   53754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:57:55.357441   53754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:57:55.358548   53754 cni.go:84] Creating CNI manager for ""
	I1009 18:57:55.358646   53754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:57:55.358766   53754 start.go:349] cluster config:
	{Name:pause-706613 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:57:55.358985   53754 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:57:55.360879   53754 out.go:179] * Starting "pause-706613" primary control-plane node in "pause-706613" cluster
	I1009 18:57:55.362179   53754 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:57:55.362225   53754 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:57:55.362244   53754 cache.go:64] Caching tarball of preloaded images
	I1009 18:57:55.362390   53754 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:57:55.362409   53754 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:57:55.362595   53754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/config.json ...
	I1009 18:57:55.362934   53754 start.go:360] acquireMachinesLock for pause-706613: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:58:12.866624   53754 start.go:364] duration metric: took 17.503634172s to acquireMachinesLock for "pause-706613"
	I1009 18:58:12.866674   53754 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:58:12.866686   53754 fix.go:54] fixHost starting: 
	I1009 18:58:12.867163   53754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:12.867221   53754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:12.885767   53754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I1009 18:58:12.886367   53754 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:12.886943   53754 main.go:141] libmachine: Using API Version  1
	I1009 18:58:12.886968   53754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:12.887369   53754 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:12.887560   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:12.887730   53754 main.go:141] libmachine: (pause-706613) Calling .GetState
	I1009 18:58:12.890894   53754 fix.go:112] recreateIfNeeded on pause-706613: state=Running err=<nil>
	W1009 18:58:12.890942   53754 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:58:12.995122   53754 out.go:252] * Updating the running kvm2 "pause-706613" VM ...
	I1009 18:58:12.995194   53754 machine.go:93] provisionDockerMachine start ...
	I1009 18:58:12.995218   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:12.995573   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:12.999549   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.000204   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.000234   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.000478   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:13.000678   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.000855   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.000986   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:13.001170   53754 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:13.001394   53754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1009 18:58:13.001408   53754 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:58:13.113488   53754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-706613
	
	I1009 18:58:13.113517   53754 main.go:141] libmachine: (pause-706613) Calling .GetMachineName
	I1009 18:58:13.113793   53754 buildroot.go:166] provisioning hostname "pause-706613"
	I1009 18:58:13.113821   53754 main.go:141] libmachine: (pause-706613) Calling .GetMachineName
	I1009 18:58:13.114063   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:13.117250   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.117717   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.117746   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.117930   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:13.118153   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.118340   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.118487   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:13.118659   53754 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:13.118886   53754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1009 18:58:13.118897   53754 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-706613 && echo "pause-706613" | sudo tee /etc/hostname
	I1009 18:58:13.251182   53754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-706613
	
	I1009 18:58:13.251212   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:13.254531   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.254928   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.254963   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.255165   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:13.255383   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.255532   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.255697   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:13.255853   53754 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:13.256082   53754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1009 18:58:13.256097   53754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-706613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-706613/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-706613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:58:13.370853   53754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:13.370884   53754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:58:13.370934   53754 buildroot.go:174] setting up certificates
	I1009 18:58:13.370946   53754 provision.go:84] configureAuth start
	I1009 18:58:13.370960   53754 main.go:141] libmachine: (pause-706613) Calling .GetMachineName
	I1009 18:58:13.371298   53754 main.go:141] libmachine: (pause-706613) Calling .GetIP
	I1009 18:58:13.374651   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.375075   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.375111   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.375279   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:13.378133   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.378574   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.378613   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.378785   53754 provision.go:143] copyHostCerts
	I1009 18:58:13.378849   53754 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:58:13.378867   53754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:13.378935   53754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:58:13.379080   53754 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:58:13.379092   53754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:13.379121   53754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:58:13.379200   53754 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:58:13.379207   53754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:13.379227   53754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:58:13.379289   53754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.pause-706613 san=[127.0.0.1 192.168.39.189 localhost minikube pause-706613]
	I1009 18:58:13.483385   53754 provision.go:177] copyRemoteCerts
	I1009 18:58:13.483447   53754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:58:13.483471   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:13.486631   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.487087   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.487121   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.487383   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:13.487605   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.487798   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:13.487966   53754 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/pause-706613/id_rsa Username:docker}
	I1009 18:58:13.581275   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:58:13.616282   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:58:13.652847   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:58:13.691257   53754 provision.go:87] duration metric: took 320.295857ms to configureAuth
	I1009 18:58:13.691287   53754 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:58:13.691486   53754 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:13.691568   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:13.694685   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.695098   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:13.695141   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:13.695362   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:13.695576   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.695765   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:13.695899   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:13.696114   53754 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:13.696332   53754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1009 18:58:13.696362   53754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:58:19.284499   53754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:58:19.284531   53754 machine.go:96] duration metric: took 6.289329087s to provisionDockerMachine
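The sysconfig write above hands CRI-O an --insecure-registry flag for 10.96.0.0/12, the cluster's service CIDR (see ServiceCIDR in the cluster config below), so registries exposed as in-cluster services can be used without TLS. The file is easy to verify on the guest:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '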
	I1009 18:58:19.284547   53754 start.go:293] postStartSetup for "pause-706613" (driver="kvm2")
	I1009 18:58:19.284559   53754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:58:19.284583   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:19.285024   53754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:58:19.285086   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:19.288429   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.289020   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:19.289064   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.289298   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:19.289510   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:19.289697   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:19.289915   53754 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/pause-706613/id_rsa Username:docker}
	I1009 18:58:19.377778   53754 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:58:19.383264   53754 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:58:19.383300   53754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:58:19.383369   53754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:58:19.383462   53754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:58:19.383614   53754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:58:19.397136   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:19.429583   53754 start.go:296] duration metric: took 145.020841ms for postStartSetup
	I1009 18:58:19.429629   53754 fix.go:56] duration metric: took 6.562944026s for fixHost
	I1009 18:58:19.429654   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:19.432567   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.433023   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:19.433078   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.433297   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:19.433531   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:19.433745   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:19.433930   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:19.434118   53754 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:19.434339   53754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1009 18:58:19.434352   53754 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:58:19.543541   53754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760036299.539937185
	
	I1009 18:58:19.543578   53754 fix.go:216] guest clock: 1760036299.539937185
	I1009 18:58:19.543588   53754 fix.go:229] Guest: 2025-10-09 18:58:19.539937185 +0000 UTC Remote: 2025-10-09 18:58:19.429633968 +0000 UTC m=+24.251128953 (delta=110.303217ms)
	I1009 18:58:19.543615   53754 fix.go:200] guest clock delta is within tolerance: 110.303217ms
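fix.go reads the guest's `date +%s.%N`, compares it against the host-side timestamp captured around the SSH call, and accepts the machine when the skew (here ~110ms) is within tolerance. The comparison can be reproduced by hand; a sketch using the profile from this run (requires bc):

	host=$(date +%s.%N)
	guest=$(out/minikube-linux-amd64 -p pause-706613 ssh "date +%s.%N")
	echo "delta: $(echo "$guest - $host" | bc)s"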
	I1009 18:58:19.543622   53754 start.go:83] releasing machines lock for "pause-706613", held for 6.676969646s
	I1009 18:58:19.543653   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:19.543943   53754 main.go:141] libmachine: (pause-706613) Calling .GetIP
	I1009 18:58:19.547531   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.548106   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:19.548149   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.548237   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:19.548758   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:19.548940   53754 main.go:141] libmachine: (pause-706613) Calling .DriverName
	I1009 18:58:19.549049   53754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:58:19.549106   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:19.549163   53754 ssh_runner.go:195] Run: cat /version.json
	I1009 18:58:19.549189   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHHostname
	I1009 18:58:19.553068   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.553170   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.553576   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:19.553595   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.553620   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:19.553631   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:19.553862   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:19.553871   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHPort
	I1009 18:58:19.554119   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:19.554139   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHKeyPath
	I1009 18:58:19.554303   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:19.554396   53754 main.go:141] libmachine: (pause-706613) Calling .GetSSHUsername
	I1009 18:58:19.554475   53754 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/pause-706613/id_rsa Username:docker}
	I1009 18:58:19.554514   53754 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/pause-706613/id_rsa Username:docker}
	I1009 18:58:19.636198   53754 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:19.673318   53754 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:58:19.831277   53754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:58:19.839079   53754 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:58:19.839162   53754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:58:19.852261   53754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:58:19.852299   53754 start.go:495] detecting cgroup driver to use...
	I1009 18:58:19.852392   53754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:58:19.880754   53754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:58:19.909559   53754 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:58:19.909638   53754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:58:19.932642   53754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:58:19.949305   53754 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:58:20.131547   53754 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:58:20.316534   53754 docker.go:234] disabling docker service ...
	I1009 18:58:20.316607   53754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:58:20.354557   53754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:58:20.376134   53754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:58:20.588282   53754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:58:20.793473   53754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
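cri-docker and docker are stopped and masked so that CRI-O ends up as the only container runtime answering on the node. Once crio is restarted further down, a state check would be expected to report (a sketch):

	sudo systemctl is-active docker cri-docker.service crio
	# inactive
	# inactive
	# active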
	I1009 18:58:20.810986   53754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:58:20.836255   53754 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:58:20.836320   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.851818   53754 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:58:20.851903   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.866126   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.883107   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.899965   53754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:58:20.920274   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.938360   53754 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:20.953531   53754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
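The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as the cgroup manager, conmon in the pod cgroup, and an unprivileged-port sysctl. Reconstructed from those sed expressions, the file should end up containing lines like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]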
	I1009 18:58:20.971287   53754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:58:20.988905   53754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:58:21.005728   53754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:21.198846   53754 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:58:21.489708   53754 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:21.489790   53754 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:21.496023   53754 start.go:563] Will wait 60s for crictl version
	I1009 18:58:21.496148   53754 ssh_runner.go:195] Run: which crictl
	I1009 18:58:21.502008   53754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:21.549237   53754 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:21.549328   53754 ssh_runner.go:195] Run: crio --version
	I1009 18:58:21.587188   53754 ssh_runner.go:195] Run: crio --version
	I1009 18:58:21.633200   53754 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:58:21.634823   53754 main.go:141] libmachine: (pause-706613) Calling .GetIP
	I1009 18:58:21.639233   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:21.639814   53754 main.go:141] libmachine: (pause-706613) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:3d:16", ip: ""} in network mk-pause-706613: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:46 +0000 UTC Type:0 Mac:52:54:00:94:3d:16 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:pause-706613 Clientid:01:52:54:00:94:3d:16}
	I1009 18:58:21.639837   53754 main.go:141] libmachine: (pause-706613) DBG | domain pause-706613 has defined IP address 192.168.39.189 and MAC address 52:54:00:94:3d:16 in network mk-pause-706613
	I1009 18:58:21.640095   53754 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 18:58:21.646017   53754 kubeadm.go:883] updating cluster {Name:pause-706613 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:58:21.646264   53754 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:21.646338   53754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:21.694532   53754 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:21.694561   53754 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:58:21.694636   53754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:21.737317   53754 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:21.737344   53754 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:58:21.737355   53754 kubeadm.go:934] updating node { 192.168.39.189 8443 v1.34.1 crio true true} ...
	I1009 18:58:21.737472   53754 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-706613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-706613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
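Note the empty ExecStart= line in the [Service] section above: in a systemd drop-in, an empty assignment clears any previously declared command before the new ExecStart takes effect, which is how minikube replaces the kubelet invocation wholesale. The merged unit can be inspected with:

	systemctl cat kubelet | grep -A1 "^ExecStart="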
	I1009 18:58:21.737549   53754 ssh_runner.go:195] Run: crio config
	I1009 18:58:21.793091   53754 cni.go:84] Creating CNI manager for ""
	I1009 18:58:21.793119   53754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:21.793135   53754 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:58:21.793179   53754 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-706613 NodeName:pause-706613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:58:21.793322   53754 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-706613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:58:21.793418   53754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:58:21.809338   53754 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:58:21.809405   53754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:58:21.828348   53754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 18:58:21.855181   53754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:58:21.885561   53754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
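The 2215-byte YAML rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it. If the file ever needs checking by hand, recent kubeadm releases (v1.26+) can lint it; a sketch using the binaries path from this run:

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new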
	I1009 18:58:21.916200   53754 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1009 18:58:21.921062   53754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:22.137577   53754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:58:22.159261   53754 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613 for IP: 192.168.39.189
	I1009 18:58:22.159288   53754 certs.go:195] generating shared ca certs ...
	I1009 18:58:22.159309   53754 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:22.159505   53754 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:58:22.159559   53754 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:58:22.159573   53754 certs.go:257] generating profile certs ...
	I1009 18:58:22.159696   53754 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/client.key
	I1009 18:58:22.159771   53754 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/apiserver.key.6e5a0455
	I1009 18:58:22.159851   53754 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/proxy-client.key
	I1009 18:58:22.160003   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:58:22.160060   53754 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:58:22.160073   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:58:22.160109   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:58:22.160148   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:58:22.160179   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:58:22.160235   53754 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:22.161114   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:58:22.206444   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:58:22.241435   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:58:22.272997   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:58:22.401674   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:58:22.471998   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:58:22.560688   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:58:22.694361   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/pause-706613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:58:22.770231   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:58:22.871446   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
	I1009 18:58:22.945962   53754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:58:23.027414   53754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:58:23.079640   53754 ssh_runner.go:195] Run: openssl version
	I1009 18:58:23.100916   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152632.pem && ln -fs /usr/share/ca-certificates/152632.pem /etc/ssl/certs/152632.pem"
	I1009 18:58:23.131008   53754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152632.pem
	I1009 18:58:23.144988   53754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:07 /usr/share/ca-certificates/152632.pem
	I1009 18:58:23.145109   53754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152632.pem
	I1009 18:58:23.161423   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152632.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:58:23.203059   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:58:23.260030   53754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:23.281603   53754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:23.281679   53754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:23.316011   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:58:23.399907   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15263.pem && ln -fs /usr/share/ca-certificates/15263.pem /etc/ssl/certs/15263.pem"
	I1009 18:58:23.453130   53754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15263.pem
	I1009 18:58:23.484518   53754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:07 /usr/share/ca-certificates/15263.pem
	I1009 18:58:23.484621   53754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15263.pem
	I1009 18:58:23.513380   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15263.pem /etc/ssl/certs/51391683.0"
	I1009 18:58:23.575702   53754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:58:23.597400   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:58:23.625373   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:58:23.646938   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:58:23.680090   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:58:23.721893   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:58:23.743982   53754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
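Each -checkend 86400 call exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how minikube decides whether the existing control-plane certs can be reused. Standalone:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"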
	I1009 18:58:23.763175   53754 kubeadm.go:400] StartCluster: {Name:pause-706613 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-706613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:23.763281   53754 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:58:23.763342   53754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:58:23.852514   53754 cri.go:89] found id: "62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80"
	I1009 18:58:23.852543   53754 cri.go:89] found id: "3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7"
	I1009 18:58:23.852549   53754 cri.go:89] found id: "6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b"
	I1009 18:58:23.852554   53754 cri.go:89] found id: "e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279"
	I1009 18:58:23.852558   53754 cri.go:89] found id: "b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963"
	I1009 18:58:23.852563   53754 cri.go:89] found id: "d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0"
	I1009 18:58:23.852566   53754 cri.go:89] found id: "a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091"
	I1009 18:58:23.852570   53754 cri.go:89] found id: "72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63"
	I1009 18:58:23.852582   53754 cri.go:89] found id: "49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500"
	I1009 18:58:23.852592   53754 cri.go:89] found id: "72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d"
	I1009 18:58:23.852596   53754 cri.go:89] found id: ""
	I1009 18:58:23.852647   53754 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-706613 -n pause-706613
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-706613 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-706613 logs -n 25: (1.699780762s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-852620    │ jenkins │ v1.32.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ -p kubernetes-upgrade-667994                                                                                                                                       │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ stopped-upgrade-644281 stop                                                                                                                                        │ stopped-upgrade-644281    │ jenkins │ v1.32.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p offline-crio-636274                                                                                                                                             │ offline-crio-636274       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-644281 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ delete  │ -p stopped-upgrade-644281                                                                                                                                          │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-852620 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p running-upgrade-852620                                                                                                                                          │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:59 UTC │
	│ delete  │ -p NoKubernetes-156430                                                                                                                                             │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ ssh     │ force-systemd-flag-026602 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ delete  │ -p force-systemd-flag-026602                                                                                                                                       │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-866940  │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	│ ssh     │ -p NoKubernetes-156430 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:58:29.883638   54372 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:58:29.883879   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.883887   54372 out.go:374] Setting ErrFile to fd 2...
	I1009 18:58:29.883891   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.884100   54372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:58:29.884605   54372 out.go:368] Setting JSON to false
	I1009 18:58:29.885504   54372 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6050,"bootTime":1760030260,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:58:29.885599   54372 start.go:141] virtualization: kvm guest
	I1009 18:58:29.887772   54372 out.go:179] * [force-systemd-env-866940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:58:29.888974   54372 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:58:29.888981   54372 notify.go:220] Checking for updates...
	I1009 18:58:29.891465   54372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:58:29.892648   54372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:58:29.894080   54372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:29.897419   54372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:58:29.898773   54372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 18:58:29.900598   54372 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:29.900732   54372 config.go:182] Loaded profile config "kubernetes-upgrade-667994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900867   54372 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900971   54372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:58:29.940183   54372 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:58:29.941515   54372 start.go:305] selected driver: kvm2
	I1009 18:58:29.941541   54372 start.go:925] validating driver "kvm2" against <nil>
	I1009 18:58:29.941585   54372 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:58:29.942359   54372 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.942453   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.957181   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.957221   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.972056   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.972112   54372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:58:29.972375   54372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:58:29.972403   54372 cni.go:84] Creating CNI manager for ""
	I1009 18:58:29.972459   54372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:29.972470   54372 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:58:29.972528   54372 start.go:349] cluster config:
	{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:29.972663   54372 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.975331   54372 out.go:179] * Starting "force-systemd-env-866940" primary control-plane node in "force-systemd-env-866940" cluster
	I1009 18:58:28.617442   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:28.618257   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:28.618287   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:28.618661   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:28.618712   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:28.618655   54090 retry.go:31] will retry after 2.048718205s: waiting for domain to come up
	I1009 18:58:30.668860   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:30.669683   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:30.669709   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:30.670246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:30.670315   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:30.670227   54090 retry.go:31] will retry after 2.480631133s: waiting for domain to come up
	I1009 18:58:29.976527   54372 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:29.976597   54372 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:58:29.976609   54372 cache.go:64] Caching tarball of preloaded images
	I1009 18:58:29.976714   54372 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:58:29.976727   54372 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:58:29.976837   54372 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json ...
	I1009 18:58:29.976863   54372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json: {Name:mk06f75730700c1e43a7f0f954227f6cc3fc181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:29.977073   54372 start.go:360] acquireMachinesLock for force-systemd-env-866940: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:58:33.154080   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:33.154827   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:33.154859   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:33.155143   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:33.155182   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:33.155136   54090 retry.go:31] will retry after 2.422416341s: waiting for domain to come up
	I1009 18:58:35.579641   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:35.580224   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:35.580246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:35.580606   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:35.580627   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:35.580578   54090 retry.go:31] will retry after 4.415560096s: waiting for domain to come up
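The `will retry after ...` lines above are minikube's backoff loop waiting for the libvirt domain to acquire a DHCP lease. A minimal Go sketch of that jittered-backoff pattern follows; `lookupLease` and `waitForDomainIP` are illustrative names, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLease is a hypothetical stand-in for querying the libvirt DHCP
// lease table; it fails until the guest has requested an address.
func lookupLease() (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForDomainIP retries with a growing, jittered delay, mirroring the
// 2.04s / 2.48s / 2.42s / 4.41s progression in the log above.
func waitForDomainIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupLease(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 4 // grow the base delay between attempts
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	if _, err := waitForDomainIP(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}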
	I1009 18:58:39.440597   52475 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.33579306s)
	I1009 18:58:39.440629   52475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:39.440689   52475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:39.447711   52475 start.go:563] Will wait 60s for crictl version
	I1009 18:58:39.447789   52475 ssh_runner.go:195] Run: which crictl
	I1009 18:58:39.452624   52475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:39.498411   52475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:39.498512   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.529885   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.562952   52475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:58:39.564260   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) Calling .GetIP
	I1009 18:58:39.567702   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568247   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:b2", ip: ""} in network mk-kubernetes-upgrade-667994: {Iface:virbr2 ExpiryTime:2025-10-09 19:56:09 +0000 UTC Type:0 Mac:52:54:00:cc:31:b2 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:kubernetes-upgrade-667994 Clientid:01:52:54:00:cc:31:b2}
	I1009 18:58:39.568281   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined IP address 192.168.50.153 and MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568540   52475 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 18:58:39.573413   52475 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:58:39.573502   52475 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:39.573544   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.623055   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.623085   52475 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:58:39.623145   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.660024   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.660066   52475 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:58:39.660076   52475 kubeadm.go:934] updating node { 192.168.50.153 8443 v1.34.1 crio true true} ...
	I1009 18:58:39.660192   52475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-667994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
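The kubelet unit drop-in above is rendered from the cluster config (Kubernetes version, node name, node IP). A hypothetical sketch of that templating step with Go's text/template, filled with the values from the log; the template constant and the struct are assumptions for illustration, not minikube's source:

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl mirrors the [Unit]/[Service] drop-in printed in the log.
const kubeletTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "kubernetes-upgrade-667994", "192.168.50.153"})
	if err != nil {
		panic(err)
	}
}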
	I1009 18:58:39.660275   52475 ssh_runner.go:195] Run: crio config
	I1009 18:58:39.710960   52475 cni.go:84] Creating CNI manager for ""
	I1009 18:58:39.710994   52475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:39.711010   52475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:58:39.711045   52475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.153 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-667994 NodeName:kubernetes-upgrade-667994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:58:39.711182   52475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-667994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:58:39.711244   52475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:58:39.725217   52475 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:58:39.725285   52475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:58:39.737633   52475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1009 18:58:39.760544   52475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:58:39.782992   52475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
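The `scp memory --> ...` lines above copy rendered bytes straight from memory to the guest rather than from a local file. A rough sketch of that idea over golang.org/x/crypto/ssh; the tee-based transfer, the key path, and the credentials here are assumptions for illustration, not necessarily minikube's exact mechanism:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyMemory streams an in-memory buffer to a remote path over an
// existing SSH connection, writing it through sudo tee.
func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	key, err := os.ReadFile(os.Getenv("SSH_KEY")) // e.g. the machine's id_rsa
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.50.153:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := copyMemory(client, []byte("kubeadm config here\n"), "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		log.Fatal(err)
	}
}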
	I1009 18:58:39.805524   52475 ssh_runner.go:195] Run: grep 192.168.50.153	control-plane.minikube.internal$ /etc/hosts
	I1009 18:58:39.810289   52475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:39.991987   52475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:58:40.016172   52475 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994 for IP: 192.168.50.153
	I1009 18:58:40.016192   52475 certs.go:195] generating shared ca certs ...
	I1009 18:58:40.016208   52475 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:40.016346   52475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:58:40.016383   52475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:58:40.016391   52475 certs.go:257] generating profile certs ...
	I1009 18:58:40.016478   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/client.key
	I1009 18:58:40.016524   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key.c1398b93
	I1009 18:58:40.016583   52475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key
	I1009 18:58:40.016710   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:58:40.016739   52475 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:58:40.016749   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:58:40.016772   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:58:40.016794   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:58:40.016815   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:58:40.016858   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:40.017397   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:58:40.049403   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:58:40.080884   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:58:40.112477   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:58:40.143864   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:58:40.176024   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:58:40.208362   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:58:40.239590   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:58:40.276018   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:58:40.313808   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:58:40.346195   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
	I1009 18:58:42.551378   54372 start.go:364] duration metric: took 12.574251915s to acquireMachinesLock for "force-systemd-env-866940"
	I1009 18:58:42.551445   54372 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:58:42.551577   54372 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 18:58:39.998380   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999086   54061 main.go:141] libmachine: (NoKubernetes-156430) found domain IP: 192.168.61.10
	I1009 18:58:39.999111   54061 main.go:141] libmachine: (NoKubernetes-156430) reserving static IP address...
	I1009 18:58:39.999127   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has current primary IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-156430", mac: "52:54:00:35:84:5d", ip: "192.168.61.10"} in network mk-NoKubernetes-156430
	I1009 18:58:40.260566   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Getting to WaitForSSH function...
	I1009 18:58:40.260617   54061 main.go:141] libmachine: (NoKubernetes-156430) reserved static IP address 192.168.61.10 for domain NoKubernetes-156430
	I1009 18:58:40.260643   54061 main.go:141] libmachine: (NoKubernetes-156430) waiting for SSH...
	I1009 18:58:40.264626   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265277   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.265312   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265489   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH client type: external
	I1009 18:58:40.265523   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa (-rw-------)
	I1009 18:58:40.265550   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:58:40.265563   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | About to run SSH command:
	I1009 18:58:40.265575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | exit 0
	I1009 18:58:40.407821   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | SSH cmd err, output: <nil>: 
	I1009 18:58:40.408193   54061 main.go:141] libmachine: (NoKubernetes-156430) domain creation complete
	I1009 18:58:40.408590   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:40.409303   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409536   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409730   54061 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:58:40.409748   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetState
	I1009 18:58:40.411565   54061 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:58:40.411580   54061 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:58:40.411585   54061 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:58:40.411591   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.414834   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.415447   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415725   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.415952   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416137   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416345   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.416554   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.416871   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.416892   54061 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:58:40.536033   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
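The `exit 0` probe above is how the machine driver decides SSH is usable: keep dialing and running a no-op command until one attempt succeeds. A sketch under that assumption; `waitForSSH` and its config setup are illustrative, and the auth methods are omitted for brevity:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials until a session can run `exit 0`, mirroring the
// "About to run SSH command: exit 0" probe in the log above.
func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil // machine is reachable and can run commands
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s after %v", addr, timeout)
}

func main() {
	// Auth omitted; a real caller would add ssh.PublicKeys(...) as above.
	cfg := &ssh.ClientConfig{User: "docker", HostKeyCallback: ssh.InsecureIgnoreHostKey()}
	if err := waitForSSH("192.168.61.10:22", cfg, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}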
	I1009 18:58:40.536091   54061 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:58:40.536103   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.539601   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540048   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.540083   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540284   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.540461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540600   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540759   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.540932   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.541175   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.541195   54061 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:58:40.668014   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 18:58:40.668182   54061 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:58:40.668202   54061 main.go:141] libmachine: Provisioning with buildroot...
	I1009 18:58:40.668214   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668487   54061 buildroot.go:166] provisioning hostname "NoKubernetes-156430"
	I1009 18:58:40.668527   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668825   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.672094   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672562   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.672591   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672839   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.673046   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673223   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.673543   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.673796   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.673811   54061 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-156430 && echo "NoKubernetes-156430" | sudo tee /etc/hostname
	I1009 18:58:40.814131   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-156430
	
	I1009 18:58:40.814166   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.817973   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818494   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.818575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818776   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.819070   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819272   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819482   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.819704   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.819912   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.819928   54061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-156430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-156430/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-156430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:58:40.960331   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.960360   54061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:58:40.960384   54061 buildroot.go:174] setting up certificates
	I1009 18:58:40.960401   54061 provision.go:84] configureAuth start
	I1009 18:58:40.960415   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.960761   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:40.964382   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.964921   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.964954   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.965178   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.968310   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.968870   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.968919   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.969111   54061 provision.go:143] copyHostCerts
	I1009 18:58:40.969145   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969181   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:58:40.969197   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969271   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:58:40.969374   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969393   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:58:40.969398   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969425   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:58:40.969504   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969533   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:58:40.969543   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969586   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:58:40.969702   54061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-156430 san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
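The provision.go line above generates a server certificate whose SANs cover the machine's IPs and hostname aliases. A self-signed stdlib sketch with the same SAN set; minikube signs with its CA key instead, so treat this only as an illustration of the SAN handling:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-156430"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.10")},
		DNSNames:    []string{"NoKubernetes-156430", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}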
	I1009 18:58:41.825514   54061 provision.go:177] copyRemoteCerts
	I1009 18:58:41.825595   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:58:41.825625   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:41.828960   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:41.829483   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829699   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:41.829890   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:41.830096   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:41.830253   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:41.925362   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:58:41.925436   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:58:41.956804   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:58:41.956924   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:58:41.989131   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:58:41.989205   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:58:42.020058   54061 provision.go:87] duration metric: took 1.059626183s to configureAuth
	I1009 18:58:42.020089   54061 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:58:42.020303   54061 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:42.020385   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.024034   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.024450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024676   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.024865   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025026   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025234   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.025433   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.025638   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.025653   54061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:58:42.274423   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:58:42.274451   54061 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:58:42.274461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetURL
	I1009 18:58:42.275927   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | using libvirt version 8000000
	I1009 18:58:42.278858   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279256   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.279289   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279476   54061 main.go:141] libmachine: Docker is up and running!
	I1009 18:58:42.279492   54061 main.go:141] libmachine: Reticulating splines...
	I1009 18:58:42.279499   54061 client.go:171] duration metric: took 22.713284182s to LocalClient.Create
	I1009 18:58:42.279522   54061 start.go:167] duration metric: took 22.713359926s to libmachine.API.Create "NoKubernetes-156430"
	I1009 18:58:42.279548   54061 start.go:293] postStartSetup for "NoKubernetes-156430" (driver="kvm2")
	I1009 18:58:42.279558   54061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:58:42.279578   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.279814   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:58:42.279845   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.282285   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282640   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.282674   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282798   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.282976   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.283169   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.283296   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.373337   54061 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:58:42.378514   54061 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:58:42.378548   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:58:42.378618   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:58:42.378713   54061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:58:42.378732   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> /etc/ssl/certs/152632.pem
	I1009 18:58:42.378881   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:58:42.391375   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:42.422367   54061 start.go:296] duration metric: took 142.804384ms for postStartSetup
	I1009 18:58:42.422479   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:42.423258   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.426192   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426499   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.426529   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426863   54061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/NoKubernetes-156430/config.json ...
	I1009 18:58:42.427143   54061 start.go:128] duration metric: took 22.88324393s to createHost
	I1009 18:58:42.427175   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.429891   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430321   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.430350   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430554   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.430735   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.430866   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.431027   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.431224   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.431461   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.431473   54061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:58:42.551194   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760036322.526817929
	
	I1009 18:58:42.551223   54061 fix.go:216] guest clock: 1760036322.526817929
	I1009 18:58:42.551235   54061 fix.go:229] Guest: 2025-10-09 18:58:42.526817929 +0000 UTC Remote: 2025-10-09 18:58:42.427160398 +0000 UTC m=+24.708548246 (delta=99.657531ms)
	I1009 18:58:42.551280   54061 fix.go:200] guest clock delta is within tolerance: 99.657531ms
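The fix.go lines above parse the guest's `date +%s.%N` output and compare it to the host clock. A stdlib sketch reproducing that arithmetic with the exact values from the log; the one-second tolerance is an assumption, since the log does not state the actual threshold:

package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta turns `date +%s.%N` output into a time.Time and
// returns its offset from the given local time. Assumes the fractional
// part is the full 9-digit nanosecond field that %N produces.
func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	// Guest and Remote timestamps taken from the log above.
	delta, err := guestClockDelta("1760036322.526817929", time.Unix(1760036322, 427160398))
	if err != nil {
		log.Fatal(err)
	}
	const tolerance = time.Second // assumed threshold for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
	// Prints: delta=99.657531ms within tolerance: true
}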
	I1009 18:58:42.551289   54061 start.go:83] releasing machines lock for "NoKubernetes-156430", held for 23.007526235s
	I1009 18:58:42.551317   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.551599   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.555353   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.555871   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.555908   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.556160   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556731   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556904   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556998   54061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:58:42.557069   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.557138   54061 ssh_runner.go:195] Run: cat /version.json
	I1009 18:58:42.557165   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.560586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.560975   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561008   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561033   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561193   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.561393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.561594   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.561636   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561916   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.562392   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.562797   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.563244   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.563412   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.563532   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
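The two sshutil clients above exist so the registry reachability probe (curl -sS -m 2 https://registry.k8s.io/) and the version check (cat /version.json) can run in parallel rather than serially. A minimal sketch of that pattern, assuming a plain ssh binary as the transport in place of minikube's ssh_runner (host, user, and commands taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	// one connection per command, as in the log, so neither blocks the other
	cmds := []*exec.Cmd{
		exec.Command("ssh", "docker@192.168.61.10", "curl -sS -m 2 https://registry.k8s.io/"),
		exec.Command("ssh", "docker@192.168.61.10", "cat /version.json"),
	}
	var wg sync.WaitGroup
	for _, c := range cmds {
		wg.Add(1)
		go func(c *exec.Cmd) {
			defer wg.Done()
			out, err := c.CombinedOutput()
			fmt.Printf("%v -> err=%v out=%q\n", c.Args, err, out)
		}(c)
	}
	wg.Wait()
}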
	I1009 18:58:42.687591   54061 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:42.696846   54061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:58:42.860249   54061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:58:42.867451   54061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:58:42.867517   54061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:58:42.897113   54061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:58:42.897141   54061 start.go:495] detecting cgroup driver to use...
	I1009 18:58:42.897220   54061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:58:42.919672   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:58:42.942589   54061 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:58:42.942699   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:58:42.965057   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:58:42.983975   54061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:58:43.208244   54061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:58:43.409851   54061 docker.go:234] disabling docker service ...
	I1009 18:58:43.409937   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:58:43.431496   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:58:43.449349   54061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:58:43.713575   54061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:58:43.917104   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:58:43.940403   54061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:58:43.966987   54061 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:58:43.967054   54061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 18:58:43.967114   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.985635   54061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:58:43.985708   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.999934   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:44.014371   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:44.031915   54061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:58:44.047615   54061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:58:44.060030   54061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:58:44.060125   54061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:58:44.088348   54061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
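The sequence above is a verify-then-fallback: the sysctl fails because /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. A sketch of the same steps run locally as root (local exec stands in for the SSH runner):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// the /proc/sys/net/bridge tree only exists once br_netfilter is loaded
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("loading br_netfilter: %v", err)
		}
	}
	// equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatalf("enabling ip_forward: %v", err)
	}
}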
	I1009 18:58:44.105749   54061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:44.276388   54061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:58:44.400747   54061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:44.400833   54061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:44.408292   54061 start.go:563] Will wait 60s for crictl version
	I1009 18:58:44.408361   54061 ssh_runner.go:195] Run: which crictl
	I1009 18:58:44.413380   54061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:44.465676   54061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:44.465768   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.505682   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.550424   54061 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1009 18:58:44.551824   54061 ssh_runner.go:195] Run: rm -f paused
	I1009 18:58:44.558855   54061 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:58:44.562268   54061 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:58:42.553872   54372 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 18:58:42.554150   54372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:42.554213   54372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:42.573562   54372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1009 18:58:42.574189   54372 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:42.574878   54372 main.go:141] libmachine: Using API Version  1
	I1009 18:58:42.574909   54372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:42.575408   54372 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:42.575629   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetMachineName
	I1009 18:58:42.575811   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .DriverName
	I1009 18:58:42.575965   54372 start.go:159] libmachine.API.Create for "force-systemd-env-866940" (driver="kvm2")
	I1009 18:58:42.575996   54372 client.go:168] LocalClient.Create starting
	I1009 18:58:42.576048   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem
	I1009 18:58:42.576104   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576129   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576200   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem
	I1009 18:58:42.576230   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576251   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576284   54372 main.go:141] libmachine: Running pre-create checks...
	I1009 18:58:42.576307   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .PreCreateCheck
	I1009 18:58:42.576640   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetConfigRaw
	I1009 18:58:42.577094   54372 main.go:141] libmachine: Creating machine...
	I1009 18:58:42.577109   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .Create
	I1009 18:58:42.577271   54372 main.go:141] libmachine: (force-systemd-env-866940) creating domain...
	I1009 18:58:42.577292   54372 main.go:141] libmachine: (force-systemd-env-866940) creating network...
	I1009 18:58:42.578684   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | found existing default network
	I1009 18:58:42.578863   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network connections='3'>
	I1009 18:58:42.578882   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>default</name>
	I1009 18:58:42.578894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 18:58:42.578906   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <forward mode='nat'>
	I1009 18:58:42.578936   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <nat>
	I1009 18:58:42.578959   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <port start='1024' end='65535'/>
	I1009 18:58:42.578972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </nat>
	I1009 18:58:42.578983   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </forward>
	I1009 18:58:42.578993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 18:58:42.579013   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 18:58:42.579030   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 18:58:42.579055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.579074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 18:58:42.579083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.579091   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.579099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.579106   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.579960   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.579788   54509 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:eb:8e} reservation:<nil>}
	I1009 18:58:42.580630   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.580543   54509 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:2a:69} reservation:<nil>}
	I1009 18:58:42.581452   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.581375   54509 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:cd:a4} reservation:<nil>}
	I1009 18:58:42.582428   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.582299   54509 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003429c0}
	I1009 18:58:42.582456   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | defining private network:
	I1009 18:58:42.582477   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.582489   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.582499   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.582514   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.582525   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.582535   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.582546   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.582560   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.582572   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.582579   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.582591   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
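network.go walks candidate private /24 blocks and takes the first one not already claimed by a host interface: 192.168.39/50/61 are taken by virbr1-3 above, so 192.168.72.0/24 wins. A minimal sketch of that probe; the candidate list is hardcoded here as an assumption, and minikube's reservation tracking is omitted:

package main

import (
	"fmt"
	"net"
)

func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative: treat unknown state as taken
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && (subnet.Contains(ipn.IP) || ipn.Contains(subnet.IP)) {
			return true
		}
	}
	return false
}

func main() {
	for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"} {
		_, subnet, _ := net.ParseCIDR(cidr)
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}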
	I1009 18:58:42.588855   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | creating private network mk-force-systemd-env-866940 192.168.72.0/24...
	I1009 18:58:42.674549   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | private network mk-force-systemd-env-866940 192.168.72.0/24 created
	I1009 18:58:42.674894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.674926   54372 main.go:141] libmachine: (force-systemd-env-866940) setting up store path in /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:42.674935   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.674947   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>e017ca39-b131-46c7-8a35-2b8acbb67618</uuid>
	I1009 18:58:42.674955   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1009 18:58:42.674964   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:e1:bc:8c'/>
	I1009 18:58:42.674976   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.674986   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.675005   54372 main.go:141] libmachine: (force-systemd-env-866940) building disk image from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:58:42.675015   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.675024   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.675033   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.675055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.675096   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.675125   54372 main.go:141] libmachine: (force-systemd-env-866940) Downloading /home/jenkins/minikube-integration/21139-11352/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 18:58:42.675139   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.675179   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.674877   54509 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:42.935427   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.935240   54509 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/id_rsa...
	I1009 18:58:43.757919   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757713   54509 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk...
	I1009 18:58:43.757972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing magic tar header
	I1009 18:58:43.757993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing SSH key tar header
	I1009 18:58:43.758008   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757830   54509 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:43.758027   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 (perms=drwx------)
	I1009 18:58:43.758063   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:58:43.758078   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube (perms=drwxr-xr-x)
	I1009 18:58:43.758093   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940
	I1009 18:58:43.758110   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines
	I1009 18:58:43.758123   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:43.758144   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352
	I1009 18:58:43.758157   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:58:43.758172   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352 (perms=drwxrwxr-x)
	I1009 18:58:43.758183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins
	I1009 18:58:43.758195   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home
	I1009 18:58:43.758208   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:58:43.758221   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:58:43.758239   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain...
	I1009 18:58:43.758248   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | skipping /home - not owner
	I1009 18:58:43.759588   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain using XML: 
	I1009 18:58:43.759617   54372 main.go:141] libmachine: (force-systemd-env-866940) <domain type='kvm'>
	I1009 18:58:43.759630   54372 main.go:141] libmachine: (force-systemd-env-866940)   <name>force-systemd-env-866940</name>
	I1009 18:58:43.759642   54372 main.go:141] libmachine: (force-systemd-env-866940)   <memory unit='MiB'>3072</memory>
	I1009 18:58:43.759656   54372 main.go:141] libmachine: (force-systemd-env-866940)   <vcpu>2</vcpu>
	I1009 18:58:43.759667   54372 main.go:141] libmachine: (force-systemd-env-866940)   <features>
	I1009 18:58:43.759680   54372 main.go:141] libmachine: (force-systemd-env-866940)     <acpi/>
	I1009 18:58:43.759686   54372 main.go:141] libmachine: (force-systemd-env-866940)     <apic/>
	I1009 18:58:43.759695   54372 main.go:141] libmachine: (force-systemd-env-866940)     <pae/>
	I1009 18:58:43.759700   54372 main.go:141] libmachine: (force-systemd-env-866940)   </features>
	I1009 18:58:43.759710   54372 main.go:141] libmachine: (force-systemd-env-866940)   <cpu mode='host-passthrough'>
	I1009 18:58:43.759720   54372 main.go:141] libmachine: (force-systemd-env-866940)   </cpu>
	I1009 18:58:43.759728   54372 main.go:141] libmachine: (force-systemd-env-866940)   <os>
	I1009 18:58:43.759738   54372 main.go:141] libmachine: (force-systemd-env-866940)     <type>hvm</type>
	I1009 18:58:43.759778   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='cdrom'/>
	I1009 18:58:43.759807   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='hd'/>
	I1009 18:58:43.759817   54372 main.go:141] libmachine: (force-systemd-env-866940)     <bootmenu enable='no'/>
	I1009 18:58:43.759827   54372 main.go:141] libmachine: (force-systemd-env-866940)   </os>
	I1009 18:58:43.759841   54372 main.go:141] libmachine: (force-systemd-env-866940)   <devices>
	I1009 18:58:43.759855   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='cdrom'>
	I1009 18:58:43.759874   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.759892   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.759904   54372 main.go:141] libmachine: (force-systemd-env-866940)       <readonly/>
	I1009 18:58:43.759917   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.759931   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='disk'>
	I1009 18:58:43.759949   54372 main.go:141] libmachine: (force-systemd-env-866940)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:58:43.759967   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.759981   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.759994   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.760007   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760019   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.760049   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760077   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760096   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760108   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='default'/>
	I1009 18:58:43.760115   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760124   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760134   54372 main.go:141] libmachine: (force-systemd-env-866940)     <serial type='pty'>
	I1009 18:58:43.760143   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target port='0'/>
	I1009 18:58:43.760157   54372 main.go:141] libmachine: (force-systemd-env-866940)     </serial>
	I1009 18:58:43.760170   54372 main.go:141] libmachine: (force-systemd-env-866940)     <console type='pty'>
	I1009 18:58:43.760181   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target type='serial' port='0'/>
	I1009 18:58:43.760193   54372 main.go:141] libmachine: (force-systemd-env-866940)     </console>
	I1009 18:58:43.760203   54372 main.go:141] libmachine: (force-systemd-env-866940)     <rng model='virtio'>
	I1009 18:58:43.760213   54372 main.go:141] libmachine: (force-systemd-env-866940)       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.760223   54372 main.go:141] libmachine: (force-systemd-env-866940)     </rng>
	I1009 18:58:43.760236   54372 main.go:141] libmachine: (force-systemd-env-866940)   </devices>
	I1009 18:58:43.760249   54372 main.go:141] libmachine: (force-systemd-env-866940) </domain>
	I1009 18:58:43.760272   54372 main.go:141] libmachine: (force-systemd-env-866940) 
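The driver then hands this XML to libvirt: define the domain, then start it ("defining domain..." / "starting domain..." in the log). A sketch using the Go libvirt bindings (github.com/libvirt/libvirt-go, which needs cgo and the libvirt headers); the XML is abbreviated to a few of the fields shown above:

package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// define persists the domain config; libvirt fills in defaults
	dom, err := conn.DomainDefineXML(`<domain type='kvm'>
  <name>example</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // actually boots the VM
		log.Fatal(err)
	}
}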
	I1009 18:58:43.765904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:78:a8:f7 in network default
	I1009 18:58:43.766797   54372 main.go:141] libmachine: (force-systemd-env-866940) starting domain...
	I1009 18:58:43.766823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:3d:b9:89 in network mk-force-systemd-env-866940
	I1009 18:58:43.766833   54372 main.go:141] libmachine: (force-systemd-env-866940) ensuring networks are active...
	I1009 18:58:43.768013   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network default is active
	I1009 18:58:43.768563   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network mk-force-systemd-env-866940 is active
	I1009 18:58:43.769446   54372 main.go:141] libmachine: (force-systemd-env-866940) getting domain XML...
	I1009 18:58:43.770823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | starting domain XML:
	I1009 18:58:43.770904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <domain type='kvm'>
	I1009 18:58:43.770920   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>force-systemd-env-866940</name>
	I1009 18:58:43.770928   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>01280892-0a35-436e-8b77-3f763c9a68f6</uuid>
	I1009 18:58:43.770945   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 18:58:43.770952   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 18:58:43.770961   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:58:43.770967   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <os>
	I1009 18:58:43.770977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:58:43.770985   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='cdrom'/>
	I1009 18:58:43.770993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='hd'/>
	I1009 18:58:43.771001   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <bootmenu enable='no'/>
	I1009 18:58:43.771010   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </os>
	I1009 18:58:43.771017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <features>
	I1009 18:58:43.771059   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <acpi/>
	I1009 18:58:43.771083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <apic/>
	I1009 18:58:43.771099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <pae/>
	I1009 18:58:43.771107   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </features>
	I1009 18:58:43.771122   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:58:43.771131   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <clock offset='utc'/>
	I1009 18:58:43.771151   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:58:43.771163   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:58:43.771189   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_crash>destroy</on_crash>
	I1009 18:58:43.771268   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <devices>
	I1009 18:58:43.772871   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:58:43.772899   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='cdrom'>
	I1009 18:58:43.772910   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:58:43.772924   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.772932   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.772941   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <readonly/>
	I1009 18:58:43.772950   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:58:43.772958   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.772966   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='disk'>
	I1009 18:58:43.772977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:58:43.772991   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.773017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.773051   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:58:43.773065   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.773074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:58:43.773087   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:58:43.773095   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773108   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:58:43.773124   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:58:43.773138   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:58:43.773148   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773160   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773170   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:3d:b9:89'/>
	I1009 18:58:43.773183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.773193   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773207   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:58:43.773217   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773233   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773243   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:78:a8:f7'/>
	I1009 18:58:43.773260   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='default'/>
	I1009 18:58:43.773270   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773284   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:58:43.773293   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773305   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <serial type='pty'>
	I1009 18:58:43.773315   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='isa-serial' port='0'>
	I1009 18:58:43.773327   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |         <model name='isa-serial'/>
	I1009 18:58:43.773336   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       </target>
	I1009 18:58:43.773347   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </serial>
	I1009 18:58:43.773356   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <console type='pty'>
	I1009 18:58:43.773367   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='serial' port='0'/>
	I1009 18:58:43.773376   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </console>
	I1009 18:58:43.773388   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:58:43.773397   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:58:43.773409   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <audio id='1' type='none'/>
	I1009 18:58:43.773419   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <memballoon model='virtio'>
	I1009 18:58:43.773433   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:58:43.773442   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </memballoon>
	I1009 18:58:43.773450   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <rng model='virtio'>
	I1009 18:58:43.773459   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.773469   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:58:43.773476   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </rng>
	I1009 18:58:43.773503   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </devices>
	I1009 18:58:43.773510   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </domain>
	I1009 18:58:43.773521   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:44.696815   53754 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: (20.623250496s)
	W1009 18:58:44.696912   53754 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: Process exited with status 1
	stdout:
	62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80
	3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7
	6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b
	e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279
	b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963
	d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0
	
	stderr:
	E1009 18:58:44.690910    3543 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist" containerID="a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091"
	time="2025-10-09T18:58:44Z" level=fatal msg="stopping the container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist"
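Note how the stop failure is downgraded to a warning (kubeadm.go:648): a container that vanishes between listing and stopping comes back as NotFound, which is harmless at this point. Sketched per-container below, with the batch invocation and SSH transport omitted; the container IDs are illustrative:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func stopContainers(ids []string) {
	for _, id := range ids {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "stop", "--timeout=10", id).CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "NotFound") {
				log.Printf("W container %s already gone, continuing", id)
				continue
			}
			log.Printf("W failed to stop %s: %v", id, err) // port conflicts may arise
		}
	}
}

func main() {
	stopContainers([]string{"62b7e37b8010", "3bb879e041b8"})
}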
	I1009 18:58:44.697010   53754 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:58:44.749170   53754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:58:44.767682   53754 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  9 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Oct  9 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  9 18:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  9 18:57 /etc/kubernetes/scheduler.conf
	
	I1009 18:58:44.767749   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:58:44.781871   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:58:44.796528   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.796591   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:58:44.813206   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.829983   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.830071   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.847176   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:58:44.860411   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.860489   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
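Each kubeconfig is grepped for the expected control-plane endpoint; any file that no longer references it is deleted so the following "kubeadm init phase kubeconfig" regenerates it. A local sketch of the same check, with paths from the log (admin.conf passed the grep above, so it is left out):

package main

import (
	"bytes"
	"log"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			log.Printf("%q not in %s - will remove", endpoint, f)
			if err := os.Remove(f); err != nil {
				log.Fatal(err)
			}
		}
	}
}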
	I1009 18:58:44.878975   53754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:58:44.899219   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:58:44.970605   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:58:40.378956   52475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:58:40.400540   52475 ssh_runner.go:195] Run: openssl version
	I1009 18:58:40.409075   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:58:40.424861   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430830   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430906   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.439375   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:58:40.456353   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15263.pem && ln -fs /usr/share/ca-certificates/15263.pem /etc/ssl/certs/15263.pem"
	I1009 18:58:40.470688   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476162   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:07 /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476231   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.483753   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15263.pem /etc/ssl/certs/51391683.0"
	I1009 18:58:40.496302   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152632.pem && ln -fs /usr/share/ca-certificates/152632.pem /etc/ssl/certs/152632.pem"
	I1009 18:58:40.513453   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519452   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:07 /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519520   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.527391   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152632.pem /etc/ssl/certs/3ec20f2e.0"
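The ln -fs dance above implements OpenSSL's hashed-directory lookup: CAs in /etc/ssl/certs are resolved by subject-hash filename (<hash>.0, e.g. b5213941.0 for minikubeCA.pem), so each installed certificate gets a symlink named after the output of openssl x509 -hash -noout. A sketch of one such link, paths taken from the log:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}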
	I1009 18:58:40.541953   52475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:58:40.548470   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:58:40.557767   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:58:40.565517   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:58:40.572929   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:58:40.580621   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:58:40.588071   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
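"-checkend 86400" asks whether each certificate will still be valid 86400 seconds (24h) from now. The same check can be done natively with crypto/x509 instead of shelling out to openssl; a sketch against one of the paths above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d of now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("needs renewal within 24h:", soon)
}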
	I1009 18:58:40.597465   52475 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:40.597566   52475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:58:40.597631   52475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:58:40.642580   52475 cri.go:89] found id: "7b1bfb45a3eaace18d65de2587497b219bf6e3cd798d8c48e231bf1ad257e307"
	I1009 18:58:40.642607   52475 cri.go:89] found id: "20555dbc4eb6b0003b9e7120a568ec710f3b9cfc6a9dbc465b148e97555bf3d3"
	I1009 18:58:40.642612   52475 cri.go:89] found id: "5786f8dd0474b8a2ef87443eeee952136aadfd10370f92cf37e07541a02b70a5"
	I1009 18:58:40.642617   52475 cri.go:89] found id: "768cac5af370455dc385009f432c0d63f62e02688e116b2dec23e64f0894578b"
	I1009 18:58:40.642621   52475 cri.go:89] found id: "d3cfd4255a6edb3154603d5b3ff89b637d21671a133fcc83891af4f6e8a205c4"
	I1009 18:58:40.642624   52475 cri.go:89] found id: "252fc791a47bf2869efe267657a31dc52be38eae30346683b37a301f9ccb7490"
	I1009 18:58:40.642627   52475 cri.go:89] found id: "4593ed25c35b4d5c00b32b02fce74c71137e47c7a00fa840eb6effa737df9cf1"
	I1009 18:58:40.642629   52475 cri.go:89] found id: "3cc8ccc81072eaaa74daa572753c0a6a4c48f52fc71a6775c657b8c33f125b68"
	I1009 18:58:40.642632   52475 cri.go:89] found id: "c1d305c91f1ec6f697cc71695ff4555d0777627b35a9cb3a117ce4ac8070ead5"
	I1009 18:58:40.642639   52475 cri.go:89] found id: "19edec96082f50e67d6381b4cc16aa130713dd9bb9ac86be629415033f890dec"
	I1009 18:58:40.642642   52475 cri.go:89] found id: "ed26a33c61e3ffc9c91ce839a3b1b8244dd3f2f0c615041ef3194575deec434c"
	I1009 18:58:40.642644   52475 cri.go:89] found id: ""
	I1009 18:58:40.642687   52475 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-706613 -n pause-706613
helpers_test.go:269: (dbg) Run:  kubectl --context pause-706613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-706613 -n pause-706613
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-706613 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-706613 logs -n 25: (1.790884117s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-852620    │ jenkins │ v1.32.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ -p kubernetes-upgrade-667994                                                                                                                                       │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ stopped-upgrade-644281 stop                                                                                                                                        │ stopped-upgrade-644281    │ jenkins │ v1.32.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p offline-crio-636274                                                                                                                                             │ offline-crio-636274       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-644281 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ delete  │ -p stopped-upgrade-644281                                                                                                                                          │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-852620 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p running-upgrade-852620                                                                                                                                          │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:59 UTC │
	│ delete  │ -p NoKubernetes-156430                                                                                                                                             │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ ssh     │ force-systemd-flag-026602 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ delete  │ -p force-systemd-flag-026602                                                                                                                                       │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-866940  │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	│ ssh     │ -p NoKubernetes-156430 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
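
Every entry below follows the klog header format documented on the line above. As a reading aid only, here is a minimal Go sketch (not part of minikube) that splits one such entry into its fields, assuming the line matches the documented layout exactly:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// header format stated in the log above.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		line := "I1009 18:58:29.883638   54372 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
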
	I1009 18:58:29.883638   54372 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:58:29.883879   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.883887   54372 out.go:374] Setting ErrFile to fd 2...
	I1009 18:58:29.883891   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.884100   54372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:58:29.884605   54372 out.go:368] Setting JSON to false
	I1009 18:58:29.885504   54372 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6050,"bootTime":1760030260,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:58:29.885599   54372 start.go:141] virtualization: kvm guest
	I1009 18:58:29.887772   54372 out.go:179] * [force-systemd-env-866940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:58:29.888974   54372 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:58:29.888981   54372 notify.go:220] Checking for updates...
	I1009 18:58:29.891465   54372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:58:29.892648   54372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:58:29.894080   54372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:29.897419   54372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:58:29.898773   54372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 18:58:29.900598   54372 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:29.900732   54372 config.go:182] Loaded profile config "kubernetes-upgrade-667994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900867   54372 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900971   54372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:58:29.940183   54372 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:58:29.941515   54372 start.go:305] selected driver: kvm2
	I1009 18:58:29.941541   54372 start.go:925] validating driver "kvm2" against <nil>
	I1009 18:58:29.941585   54372 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:58:29.942359   54372 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.942453   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.957181   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.957221   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.972056   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.972112   54372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:58:29.972375   54372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:58:29.972403   54372 cni.go:84] Creating CNI manager for ""
	I1009 18:58:29.972459   54372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:29.972470   54372 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:58:29.972528   54372 start.go:349] cluster config:
	{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:29.972663   54372 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.975331   54372 out.go:179] * Starting "force-systemd-env-866940" primary control-plane node in "force-systemd-env-866940" cluster
	I1009 18:58:28.617442   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:28.618257   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:28.618287   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:28.618661   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:28.618712   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:28.618655   54090 retry.go:31] will retry after 2.048718205s: waiting for domain to come up
	I1009 18:58:30.668860   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:30.669683   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:30.669709   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:30.670246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:30.670315   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:30.670227   54090 retry.go:31] will retry after 2.480631133s: waiting for domain to come up
	I1009 18:58:29.976527   54372 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:29.976597   54372 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:58:29.976609   54372 cache.go:64] Caching tarball of preloaded images
	I1009 18:58:29.976714   54372 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:58:29.976727   54372 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:58:29.976837   54372 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json ...
	I1009 18:58:29.976863   54372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json: {Name:mk06f75730700c1e43a7f0f954227f6cc3fc181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:29.977073   54372 start.go:360] acquireMachinesLock for force-systemd-env-866940: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:58:33.154080   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:33.154827   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:33.154859   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:33.155143   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:33.155182   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:33.155136   54090 retry.go:31] will retry after 2.422416341s: waiting for domain to come up
	I1009 18:58:35.579641   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:35.580224   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:35.580246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:35.580606   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:35.580627   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:35.580578   54090 retry.go:31] will retry after 4.415560096s: waiting for domain to come up
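
The interleaved "will retry after ..." lines show the wait-for-IP loop backing off with a growing, jittered delay. A sketch of that pattern under stated assumptions (lookup is a stand-in for the lease/ARP query; this is not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries a lookup with a growing, jittered delay -- the same
	// shape as the "will retry after ..." lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		backoff := time.Second
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: waiting for domain to come up\n", delay)
			time.Sleep(delay)
			backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
		}
		return "", errors.New("domain never reported an IP address")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			if calls++; calls < 3 {
				return "", errors.New("no interface addresses yet")
			}
			return "192.168.61.10", nil
		}, 10)
		fmt.Println(ip, err)
	}
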
	I1009 18:58:39.440597   52475 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.33579306s)
	I1009 18:58:39.440629   52475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:39.440689   52475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:39.447711   52475 start.go:563] Will wait 60s for crictl version
	I1009 18:58:39.447789   52475 ssh_runner.go:195] Run: which crictl
	I1009 18:58:39.452624   52475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:39.498411   52475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
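
Before the version probe above, the runner waited up to 60s for /var/run/crio/crio.sock to appear. A minimal sketch of that kind of socket wait (the poll interval and structure are assumptions, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls os.Stat until the CRI socket exists or the deadline
	// passes, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
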
	I1009 18:58:39.498512   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.529885   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.562952   52475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:58:39.564260   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) Calling .GetIP
	I1009 18:58:39.567702   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568247   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:b2", ip: ""} in network mk-kubernetes-upgrade-667994: {Iface:virbr2 ExpiryTime:2025-10-09 19:56:09 +0000 UTC Type:0 Mac:52:54:00:cc:31:b2 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:kubernetes-upgrade-667994 Clientid:01:52:54:00:cc:31:b2}
	I1009 18:58:39.568281   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined IP address 192.168.50.153 and MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568540   52475 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 18:58:39.573413   52475 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:58:39.573502   52475 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:39.573544   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.623055   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.623085   52475 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:58:39.623145   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.660024   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.660066   52475 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:58:39.660076   52475 kubeadm.go:934] updating node { 192.168.50.153 8443 v1.34.1 crio true true} ...
	I1009 18:58:39.660192   52475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-667994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:58:39.660275   52475 ssh_runner.go:195] Run: crio config
	I1009 18:58:39.710960   52475 cni.go:84] Creating CNI manager for ""
	I1009 18:58:39.710994   52475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:39.711010   52475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:58:39.711045   52475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.153 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-667994 NodeName:kubernetes-upgrade-667994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:58:39.711182   52475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-667994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
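
The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that splits such a stream and reports each document's apiVersion/kind; the kubeadm.yaml path is a hypothetical local copy, not the /var/tmp/minikube/kubeadm.yaml.new written below:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Hypothetical local copy of the multi-document config shown above.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		doc := 1
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			switch {
			case strings.TrimSpace(line) == "---":
				doc++ // "---" separates the YAML documents
			case strings.HasPrefix(line, "apiVersion:"), strings.HasPrefix(line, "kind:"):
				fmt.Printf("doc %d: %s\n", doc, line)
			}
		}
	}
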
	
	I1009 18:58:39.711244   52475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:58:39.725217   52475 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:58:39.725285   52475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:58:39.737633   52475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1009 18:58:39.760544   52475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:58:39.782992   52475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1009 18:58:39.805524   52475 ssh_runner.go:195] Run: grep 192.168.50.153	control-plane.minikube.internal$ /etc/hosts
	I1009 18:58:39.810289   52475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:39.991987   52475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:58:40.016172   52475 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994 for IP: 192.168.50.153
	I1009 18:58:40.016192   52475 certs.go:195] generating shared ca certs ...
	I1009 18:58:40.016208   52475 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:40.016346   52475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:58:40.016383   52475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:58:40.016391   52475 certs.go:257] generating profile certs ...
	I1009 18:58:40.016478   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/client.key
	I1009 18:58:40.016524   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key.c1398b93
	I1009 18:58:40.016583   52475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key
	I1009 18:58:40.016710   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:58:40.016739   52475 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:58:40.016749   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:58:40.016772   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:58:40.016794   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:58:40.016815   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:58:40.016858   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:40.017397   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:58:40.049403   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:58:40.080884   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:58:40.112477   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:58:40.143864   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:58:40.176024   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:58:40.208362   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:58:40.239590   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:58:40.276018   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:58:40.313808   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:58:40.346195   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
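
The "skipping valid ... cert" decisions earlier in this run hinge on the certificate still being valid. A small Go sketch that loads one PEM certificate and reports its expiry; the ca.crt path is a hypothetical local copy:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Hypothetical path; the logs above use ~/.minikube/ca.crt.
		data, err := os.ReadFile("ca.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("subject=%s notAfter=%s valid=%t\n",
			cert.Subject, cert.NotAfter, time.Now().Before(cert.NotAfter))
	}
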
	I1009 18:58:42.551378   54372 start.go:364] duration metric: took 12.574251915s to acquireMachinesLock for "force-systemd-env-866940"
	I1009 18:58:42.551445   54372 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:58:42.551577   54372 start.go:125] createHost starting for "" (driver="kvm2")
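
acquireMachinesLock above took 12.57s because another profile held the machines lock; the {Delay:500ms Timeout:13m0s} spec describes a poll-until-deadline acquisition. A sketch of that pattern with a plain O_EXCL lock file (illustrative only, not minikube's actual lock implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries O_CREATE|O_EXCL on a lock file every delay until
	// timeout, then gives up -- the Delay/Timeout pattern logged above.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Printf("acquired lock in %s\n", time.Since(start))
	}
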
	I1009 18:58:39.998380   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999086   54061 main.go:141] libmachine: (NoKubernetes-156430) found domain IP: 192.168.61.10
	I1009 18:58:39.999111   54061 main.go:141] libmachine: (NoKubernetes-156430) reserving static IP address...
	I1009 18:58:39.999127   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has current primary IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-156430", mac: "52:54:00:35:84:5d", ip: "192.168.61.10"} in network mk-NoKubernetes-156430
	I1009 18:58:40.260566   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Getting to WaitForSSH function...
	I1009 18:58:40.260617   54061 main.go:141] libmachine: (NoKubernetes-156430) reserved static IP address 192.168.61.10 for domain NoKubernetes-156430
	I1009 18:58:40.260643   54061 main.go:141] libmachine: (NoKubernetes-156430) waiting for SSH...
	I1009 18:58:40.264626   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265277   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.265312   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265489   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH client type: external
	I1009 18:58:40.265523   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa (-rw-------)
	I1009 18:58:40.265550   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:58:40.265563   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | About to run SSH command:
	I1009 18:58:40.265575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | exit 0
	I1009 18:58:40.407821   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | SSH cmd err, output: <nil>: 
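
The external SSH probe above succeeds once `exit 0` runs cleanly over ssh with the printed options. Reconstructed as a standalone Go program for illustration; the key path and address are the ones logged above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Options mirror the "Using SSH client type: external" line above.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa",
			"-p", "22",
			"docker@192.168.61.10",
			"exit 0",
		}
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
			fmt.Println("ssh not ready yet:", err)
			return
		}
		fmt.Println("SSH is available")
	}
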
	I1009 18:58:40.408193   54061 main.go:141] libmachine: (NoKubernetes-156430) domain creation complete
	I1009 18:58:40.408590   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:40.409303   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409536   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409730   54061 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:58:40.409748   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetState
	I1009 18:58:40.411565   54061 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:58:40.411580   54061 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:58:40.411585   54061 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:58:40.411591   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.414834   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.415447   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415725   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.415952   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416137   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416345   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.416554   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.416871   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.416892   54061 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:58:40.536033   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.536091   54061 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:58:40.536103   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.539601   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540048   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.540083   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540284   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.540461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540600   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540759   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.540932   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.541175   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.541195   54061 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:58:40.668014   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 18:58:40.668182   54061 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:58:40.668202   54061 main.go:141] libmachine: Provisioning with buildroot...
	I1009 18:58:40.668214   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668487   54061 buildroot.go:166] provisioning hostname "NoKubernetes-156430"
	I1009 18:58:40.668527   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668825   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.672094   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672562   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.672591   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672839   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.673046   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673223   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.673543   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.673796   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.673811   54061 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-156430 && echo "NoKubernetes-156430" | sudo tee /etc/hostname
	I1009 18:58:40.814131   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-156430
	
	I1009 18:58:40.814166   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.817973   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818494   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.818575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818776   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.819070   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819272   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819482   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.819704   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.819912   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.819928   54061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-156430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-156430/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-156430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:58:40.960331   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.960360   54061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:58:40.960384   54061 buildroot.go:174] setting up certificates
	I1009 18:58:40.960401   54061 provision.go:84] configureAuth start
	I1009 18:58:40.960415   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.960761   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:40.964382   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.964921   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.964954   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.965178   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.968310   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.968870   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.968919   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.969111   54061 provision.go:143] copyHostCerts
	I1009 18:58:40.969145   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969181   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:58:40.969197   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969271   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:58:40.969374   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969393   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:58:40.969398   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969425   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:58:40.969504   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969533   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:58:40.969543   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969586   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:58:40.969702   54061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-156430 san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
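
The server cert generated above carries the SANs listed in san=[...]. A compressed stdlib sketch that issues a certificate with those SANs; it self-signs for brevity, whereas minikube signs with its CA key:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-156430"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			// SANs matching the san=[...] list in the log above.
			DNSNames:    []string{"NoKubernetes-156430", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.10")},
			KeyUsage:    x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
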
	I1009 18:58:41.825514   54061 provision.go:177] copyRemoteCerts
	I1009 18:58:41.825595   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:58:41.825625   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:41.828960   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:41.829483   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829699   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:41.829890   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:41.830096   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:41.830253   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:41.925362   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:58:41.925436   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:58:41.956804   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:58:41.956924   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:58:41.989131   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:58:41.989205   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:58:42.020058   54061 provision.go:87] duration metric: took 1.059626183s to configureAuth
	I1009 18:58:42.020089   54061 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:58:42.020303   54061 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:42.020385   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.024034   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.024450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024676   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.024865   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025026   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025234   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.025433   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.025638   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.025653   54061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:58:42.274423   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:58:42.274451   54061 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:58:42.274461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetURL
	I1009 18:58:42.275927   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | using libvirt version 8000000
	I1009 18:58:42.278858   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279256   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.279289   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279476   54061 main.go:141] libmachine: Docker is up and running!
	I1009 18:58:42.279492   54061 main.go:141] libmachine: Reticulating splines...
	I1009 18:58:42.279499   54061 client.go:171] duration metric: took 22.713284182s to LocalClient.Create
	I1009 18:58:42.279522   54061 start.go:167] duration metric: took 22.713359926s to libmachine.API.Create "NoKubernetes-156430"
	I1009 18:58:42.279548   54061 start.go:293] postStartSetup for "NoKubernetes-156430" (driver="kvm2")
	I1009 18:58:42.279558   54061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:58:42.279578   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.279814   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:58:42.279845   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.282285   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282640   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.282674   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282798   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.282976   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.283169   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.283296   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.373337   54061 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:58:42.378514   54061 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:58:42.378548   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:58:42.378618   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:58:42.378713   54061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:58:42.378732   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> /etc/ssl/certs/152632.pem
	I1009 18:58:42.378881   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:58:42.391375   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:42.422367   54061 start.go:296] duration metric: took 142.804384ms for postStartSetup
	I1009 18:58:42.422479   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:42.423258   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.426192   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426499   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.426529   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426863   54061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/NoKubernetes-156430/config.json ...
	I1009 18:58:42.427143   54061 start.go:128] duration metric: took 22.88324393s to createHost
	I1009 18:58:42.427175   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.429891   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430321   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.430350   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430554   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.430735   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.430866   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.431027   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.431224   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.431461   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.431473   54061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:58:42.551194   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760036322.526817929
	
	I1009 18:58:42.551223   54061 fix.go:216] guest clock: 1760036322.526817929
	I1009 18:58:42.551235   54061 fix.go:229] Guest: 2025-10-09 18:58:42.526817929 +0000 UTC Remote: 2025-10-09 18:58:42.427160398 +0000 UTC m=+24.708548246 (delta=99.657531ms)
	I1009 18:58:42.551280   54061 fix.go:200] guest clock delta is within tolerance: 99.657531ms
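fix.go compares the guest's "date +%s.%N" output against the host clock and only intervenes when the drift exceeds a tolerance; here the delta is 99.657531ms and passes. A small sketch of that check, with the 2s tolerance being an assumed value rather than minikube's actual constant:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // guestClockDelta parses the guest's "date +%s.%N" output and returns the
    // signed drift against the given host time. float64 rounding costs a few
    // hundred nanoseconds at this epoch, which is fine for a drift check.
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        const tolerance = 2 * time.Second          // assumed tolerance, not minikube's constant
        host := time.Unix(0, 1760036322427160398)  // host-side timestamp from the log
        d, err := guestClockDelta("1760036322.526817929", host)
        if err != nil {
            panic(err)
        }
        if d < 0 {
            d = -d
        }
        fmt.Printf("delta=%v within tolerance: %v\n", d, d <= tolerance)
    }
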
	I1009 18:58:42.551289   54061 start.go:83] releasing machines lock for "NoKubernetes-156430", held for 23.007526235s
	I1009 18:58:42.551317   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.551599   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.555353   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.555871   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.555908   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.556160   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556731   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556904   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556998   54061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:58:42.557069   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.557138   54061 ssh_runner.go:195] Run: cat /version.json
	I1009 18:58:42.557165   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.560586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.560975   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561008   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561033   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561193   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.561393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.561594   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.561636   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561916   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.562392   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.562797   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.563244   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.563412   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.563532   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.687591   54061 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:42.696846   54061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:58:42.860249   54061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:58:42.867451   54061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:58:42.867517   54061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:58:42.897113   54061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:58:42.897141   54061 start.go:495] detecting cgroup driver to use...
	I1009 18:58:42.897220   54061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:58:42.919672   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:58:42.942589   54061 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:58:42.942699   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:58:42.965057   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:58:42.983975   54061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:58:43.208244   54061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:58:43.409851   54061 docker.go:234] disabling docker service ...
	I1009 18:58:43.409937   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:58:43.431496   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:58:43.449349   54061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:58:43.713575   54061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:58:43.917104   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:58:43.940403   54061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:58:43.966987   54061 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:58:43.967054   54061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 18:58:43.967114   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.985635   54061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:58:43.985708   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.999934   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:44.014371   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
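These ssh_runner calls edit /etc/crio/crio.conf.d/02-crio.conf in place with sed. A rough Go equivalent of the first two substitutions, assuming direct file access instead of SSH (run it against a copy of the file when experimenting):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }
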
	I1009 18:58:44.031915   54061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:58:44.047615   54061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:58:44.060030   54061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:58:44.060125   54061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
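The sysctl probe fails because br_netfilter is not loaded yet, so the code falls back to modprobe. A local sketch of that verify-then-load fallback (Linux only, needs sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the probe-then-modprobe sequence in the log:
    // the sysctl read exits non-zero while br_netfilter is unloaded, and loading
    // the module creates /proc/sys/net/bridge/bridge-nf-call-iptables.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil
        }
        if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
            return
        }
        fmt.Println("bridge netfilter available")
    }
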
	I1009 18:58:44.088348   54061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:58:44.105749   54061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:44.276388   54061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:58:44.400747   54061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:44.400833   54061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:44.408292   54061 start.go:563] Will wait 60s for crictl version
	I1009 18:58:44.408361   54061 ssh_runner.go:195] Run: which crictl
	I1009 18:58:44.413380   54061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:44.465676   54061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:44.465768   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.505682   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.550424   54061 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1009 18:58:44.551824   54061 ssh_runner.go:195] Run: rm -f paused
	I1009 18:58:44.558855   54061 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:58:44.562268   54061 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:58:42.553872   54372 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 18:58:42.554150   54372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:42.554213   54372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:42.573562   54372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1009 18:58:42.574189   54372 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:42.574878   54372 main.go:141] libmachine: Using API Version  1
	I1009 18:58:42.574909   54372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:42.575408   54372 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:42.575629   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetMachineName
	I1009 18:58:42.575811   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .DriverName
	I1009 18:58:42.575965   54372 start.go:159] libmachine.API.Create for "force-systemd-env-866940" (driver="kvm2")
	I1009 18:58:42.575996   54372 client.go:168] LocalClient.Create starting
	I1009 18:58:42.576048   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem
	I1009 18:58:42.576104   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576129   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576200   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem
	I1009 18:58:42.576230   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576251   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576284   54372 main.go:141] libmachine: Running pre-create checks...
	I1009 18:58:42.576307   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .PreCreateCheck
	I1009 18:58:42.576640   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetConfigRaw
	I1009 18:58:42.577094   54372 main.go:141] libmachine: Creating machine...
	I1009 18:58:42.577109   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .Create
	I1009 18:58:42.577271   54372 main.go:141] libmachine: (force-systemd-env-866940) creating domain...
	I1009 18:58:42.577292   54372 main.go:141] libmachine: (force-systemd-env-866940) creating network...
	I1009 18:58:42.578684   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | found existing default network
	I1009 18:58:42.578863   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network connections='3'>
	I1009 18:58:42.578882   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>default</name>
	I1009 18:58:42.578894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 18:58:42.578906   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <forward mode='nat'>
	I1009 18:58:42.578936   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <nat>
	I1009 18:58:42.578959   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <port start='1024' end='65535'/>
	I1009 18:58:42.578972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </nat>
	I1009 18:58:42.578983   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </forward>
	I1009 18:58:42.578993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 18:58:42.579013   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 18:58:42.579030   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 18:58:42.579055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.579074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 18:58:42.579083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.579091   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.579099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.579106   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.579960   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.579788   54509 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:eb:8e} reservation:<nil>}
	I1009 18:58:42.580630   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.580543   54509 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:2a:69} reservation:<nil>}
	I1009 18:58:42.581452   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.581375   54509 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:cd:a4} reservation:<nil>}
	I1009 18:58:42.582428   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.582299   54509 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003429c0}
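network.go walks candidate private /24 subnets, skips any that an existing libvirt bridge already occupies (192.168.39, .50, and .61 above), and takes the first free one, 192.168.72.0/24. A sketch of that scan using the standard library; the candidate list here is an assumption:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet returns the first candidate /24 that no local interface
    // address falls inside, mimicking the skip/skip/skip/use sequence above.
    func firstFreeSubnet(candidates []string) (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            taken := false
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
                    taken = true // e.g. virbr1 holding 192.168.39.1 claims 192.168.39.0/24
                    break
                }
            }
            if !taken {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        s, err := firstFreeSubnet([]string{
            "192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", s)
    }
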
	I1009 18:58:42.582456   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | defining private network:
	I1009 18:58:42.582477   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.582489   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.582499   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.582514   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.582525   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.582535   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.582546   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.582560   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.582572   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.582579   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.582591   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.588855   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | creating private network mk-force-systemd-env-866940 192.168.72.0/24...
	I1009 18:58:42.674549   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | private network mk-force-systemd-env-866940 192.168.72.0/24 created
	I1009 18:58:42.674894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.674926   54372 main.go:141] libmachine: (force-systemd-env-866940) setting up store path in /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:42.674935   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.674947   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>e017ca39-b131-46c7-8a35-2b8acbb67618</uuid>
	I1009 18:58:42.674955   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1009 18:58:42.674964   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:e1:bc:8c'/>
	I1009 18:58:42.674976   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.674986   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.675005   54372 main.go:141] libmachine: (force-systemd-env-866940) building disk image from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:58:42.675015   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.675024   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.675033   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.675055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.675096   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.675125   54372 main.go:141] libmachine: (force-systemd-env-866940) Downloading /home/jenkins/minikube-integration/21139-11352/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 18:58:42.675139   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.675179   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.674877   54509 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:42.935427   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.935240   54509 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/id_rsa...
	I1009 18:58:43.757919   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757713   54509 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk...
	I1009 18:58:43.757972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing magic tar header
	I1009 18:58:43.757993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing SSH key tar header
	I1009 18:58:43.758008   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757830   54509 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:43.758027   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 (perms=drwx------)
	I1009 18:58:43.758063   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:58:43.758078   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube (perms=drwxr-xr-x)
	I1009 18:58:43.758093   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940
	I1009 18:58:43.758110   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines
	I1009 18:58:43.758123   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:43.758144   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352
	I1009 18:58:43.758157   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:58:43.758172   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352 (perms=drwxrwxr-x)
	I1009 18:58:43.758183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins
	I1009 18:58:43.758195   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home
	I1009 18:58:43.758208   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:58:43.758221   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:58:43.758239   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain...
	I1009 18:58:43.758248   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | skipping /home - not owner
	I1009 18:58:43.759588   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain using XML: 
	I1009 18:58:43.759617   54372 main.go:141] libmachine: (force-systemd-env-866940) <domain type='kvm'>
	I1009 18:58:43.759630   54372 main.go:141] libmachine: (force-systemd-env-866940)   <name>force-systemd-env-866940</name>
	I1009 18:58:43.759642   54372 main.go:141] libmachine: (force-systemd-env-866940)   <memory unit='MiB'>3072</memory>
	I1009 18:58:43.759656   54372 main.go:141] libmachine: (force-systemd-env-866940)   <vcpu>2</vcpu>
	I1009 18:58:43.759667   54372 main.go:141] libmachine: (force-systemd-env-866940)   <features>
	I1009 18:58:43.759680   54372 main.go:141] libmachine: (force-systemd-env-866940)     <acpi/>
	I1009 18:58:43.759686   54372 main.go:141] libmachine: (force-systemd-env-866940)     <apic/>
	I1009 18:58:43.759695   54372 main.go:141] libmachine: (force-systemd-env-866940)     <pae/>
	I1009 18:58:43.759700   54372 main.go:141] libmachine: (force-systemd-env-866940)   </features>
	I1009 18:58:43.759710   54372 main.go:141] libmachine: (force-systemd-env-866940)   <cpu mode='host-passthrough'>
	I1009 18:58:43.759720   54372 main.go:141] libmachine: (force-systemd-env-866940)   </cpu>
	I1009 18:58:43.759728   54372 main.go:141] libmachine: (force-systemd-env-866940)   <os>
	I1009 18:58:43.759738   54372 main.go:141] libmachine: (force-systemd-env-866940)     <type>hvm</type>
	I1009 18:58:43.759778   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='cdrom'/>
	I1009 18:58:43.759807   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='hd'/>
	I1009 18:58:43.759817   54372 main.go:141] libmachine: (force-systemd-env-866940)     <bootmenu enable='no'/>
	I1009 18:58:43.759827   54372 main.go:141] libmachine: (force-systemd-env-866940)   </os>
	I1009 18:58:43.759841   54372 main.go:141] libmachine: (force-systemd-env-866940)   <devices>
	I1009 18:58:43.759855   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='cdrom'>
	I1009 18:58:43.759874   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.759892   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.759904   54372 main.go:141] libmachine: (force-systemd-env-866940)       <readonly/>
	I1009 18:58:43.759917   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.759931   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='disk'>
	I1009 18:58:43.759949   54372 main.go:141] libmachine: (force-systemd-env-866940)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:58:43.759967   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.759981   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.759994   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.760007   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760019   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.760049   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760077   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760096   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760108   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='default'/>
	I1009 18:58:43.760115   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760124   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760134   54372 main.go:141] libmachine: (force-systemd-env-866940)     <serial type='pty'>
	I1009 18:58:43.760143   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target port='0'/>
	I1009 18:58:43.760157   54372 main.go:141] libmachine: (force-systemd-env-866940)     </serial>
	I1009 18:58:43.760170   54372 main.go:141] libmachine: (force-systemd-env-866940)     <console type='pty'>
	I1009 18:58:43.760181   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target type='serial' port='0'/>
	I1009 18:58:43.760193   54372 main.go:141] libmachine: (force-systemd-env-866940)     </console>
	I1009 18:58:43.760203   54372 main.go:141] libmachine: (force-systemd-env-866940)     <rng model='virtio'>
	I1009 18:58:43.760213   54372 main.go:141] libmachine: (force-systemd-env-866940)       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.760223   54372 main.go:141] libmachine: (force-systemd-env-866940)     </rng>
	I1009 18:58:43.760236   54372 main.go:141] libmachine: (force-systemd-env-866940)   </devices>
	I1009 18:58:43.760249   54372 main.go:141] libmachine: (force-systemd-env-866940) </domain>
	I1009 18:58:43.760272   54372 main.go:141] libmachine: (force-systemd-env-866940) 
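With the domain XML assembled, the driver defines the domain against qemu:///system and later starts it. A hedged sketch of that define-then-start flow using the libvirt Go bindings (libvirt.org/go/libvirt, which needs cgo and a running libvirtd); the XML here is a stand-in for the full definition printed above:

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Stand-in XML; the driver feeds in the full definition printed above.
        domainXML := `<domain type='kvm'>
      <name>sketch-vm</name>
      <memory unit='MiB'>3072</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`

        dom, err := conn.DomainDefineXML(domainXML) // persist the definition
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot the defined domain
            panic(err)
        }
        fmt.Println("domain defined and started")
    }
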
	I1009 18:58:43.765904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:78:a8:f7 in network default
	I1009 18:58:43.766797   54372 main.go:141] libmachine: (force-systemd-env-866940) starting domain...
	I1009 18:58:43.766823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:3d:b9:89 in network mk-force-systemd-env-866940
	I1009 18:58:43.766833   54372 main.go:141] libmachine: (force-systemd-env-866940) ensuring networks are active...
	I1009 18:58:43.768013   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network default is active
	I1009 18:58:43.768563   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network mk-force-systemd-env-866940 is active
	I1009 18:58:43.769446   54372 main.go:141] libmachine: (force-systemd-env-866940) getting domain XML...
	I1009 18:58:43.770823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | starting domain XML:
	I1009 18:58:43.770904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <domain type='kvm'>
	I1009 18:58:43.770920   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>force-systemd-env-866940</name>
	I1009 18:58:43.770928   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>01280892-0a35-436e-8b77-3f763c9a68f6</uuid>
	I1009 18:58:43.770945   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 18:58:43.770952   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 18:58:43.770961   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:58:43.770967   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <os>
	I1009 18:58:43.770977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:58:43.770985   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='cdrom'/>
	I1009 18:58:43.770993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='hd'/>
	I1009 18:58:43.771001   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <bootmenu enable='no'/>
	I1009 18:58:43.771010   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </os>
	I1009 18:58:43.771017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <features>
	I1009 18:58:43.771059   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <acpi/>
	I1009 18:58:43.771083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <apic/>
	I1009 18:58:43.771099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <pae/>
	I1009 18:58:43.771107   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </features>
	I1009 18:58:43.771122   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:58:43.771131   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <clock offset='utc'/>
	I1009 18:58:43.771151   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:58:43.771163   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:58:43.771189   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_crash>destroy</on_crash>
	I1009 18:58:43.771268   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <devices>
	I1009 18:58:43.772871   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:58:43.772899   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='cdrom'>
	I1009 18:58:43.772910   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:58:43.772924   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.772932   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.772941   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <readonly/>
	I1009 18:58:43.772950   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:58:43.772958   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.772966   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='disk'>
	I1009 18:58:43.772977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:58:43.772991   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.773017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.773051   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:58:43.773065   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.773074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:58:43.773087   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:58:43.773095   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773108   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:58:43.773124   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:58:43.773138   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:58:43.773148   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773160   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773170   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:3d:b9:89'/>
	I1009 18:58:43.773183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.773193   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773207   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:58:43.773217   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773233   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773243   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:78:a8:f7'/>
	I1009 18:58:43.773260   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='default'/>
	I1009 18:58:43.773270   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773284   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:58:43.773293   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773305   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <serial type='pty'>
	I1009 18:58:43.773315   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='isa-serial' port='0'>
	I1009 18:58:43.773327   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |         <model name='isa-serial'/>
	I1009 18:58:43.773336   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       </target>
	I1009 18:58:43.773347   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </serial>
	I1009 18:58:43.773356   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <console type='pty'>
	I1009 18:58:43.773367   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='serial' port='0'/>
	I1009 18:58:43.773376   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </console>
	I1009 18:58:43.773388   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:58:43.773397   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:58:43.773409   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <audio id='1' type='none'/>
	I1009 18:58:43.773419   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <memballoon model='virtio'>
	I1009 18:58:43.773433   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:58:43.773442   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </memballoon>
	I1009 18:58:43.773450   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <rng model='virtio'>
	I1009 18:58:43.773459   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.773469   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:58:43.773476   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </rng>
	I1009 18:58:43.773503   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </devices>
	I1009 18:58:43.773510   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </domain>
	I1009 18:58:43.773521   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:44.696815   53754 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: (20.623250496s)
	W1009 18:58:44.696912   53754 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: Process exited with status 1
	stdout:
	62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80
	3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7
	6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b
	e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279
	b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963
	d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0
	
	stderr:
	E1009 18:58:44.690910    3543 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist" containerID="a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091"
	time="2025-10-09T18:58:44Z" level=fatal msg="stopping the container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist"
	I1009 18:58:44.697010   53754 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:58:44.749170   53754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:58:44.767682   53754 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  9 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Oct  9 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  9 18:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  9 18:57 /etc/kubernetes/scheduler.conf
	
	I1009 18:58:44.767749   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:58:44.781871   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:58:44.796528   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.796591   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:58:44.813206   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.829983   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.830071   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.847176   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:58:44.860411   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.860489   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
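The pattern above repeats per file: grep each kubeconfig for https://control-plane.minikube.internal:8443 and delete it when the endpoint is missing, so the kubeadm init phases that follow regenerate it. A compact sketch of the same prune loop:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleConfigs keeps a kubeconfig only if it already points at the
    // expected control-plane endpoint; otherwise it removes the file so the
    // subsequent "kubeadm init phase kubeconfig" step regenerates it.
    func pruneStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // a missing file is fine; kubeadm will create it
            }
            if strings.Contains(string(data), endpoint) {
                continue // still points at the right control plane
            }
            fmt.Printf("%q lacks %q - removing\n", f, endpoint)
            _ = os.Remove(f)
        }
    }

    func main() {
        pruneStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
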
	I1009 18:58:44.878975   53754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:58:44.899219   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:58:44.970605   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:58:40.378956   52475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:58:40.400540   52475 ssh_runner.go:195] Run: openssl version
	I1009 18:58:40.409075   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:58:40.424861   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430830   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430906   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.439375   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:58:40.456353   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15263.pem && ln -fs /usr/share/ca-certificates/15263.pem /etc/ssl/certs/15263.pem"
	I1009 18:58:40.470688   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476162   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:07 /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476231   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.483753   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15263.pem /etc/ssl/certs/51391683.0"
	I1009 18:58:40.496302   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152632.pem && ln -fs /usr/share/ca-certificates/152632.pem /etc/ssl/certs/152632.pem"
	I1009 18:58:40.513453   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519452   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:07 /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519520   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.527391   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152632.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:58:40.541953   52475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:58:40.548470   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:58:40.557767   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:58:40.565517   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:58:40.572929   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:58:40.580621   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:58:40.588071   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
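The certificate phase in these lines does two things: it links each CA bundle under its OpenSSL subject hash in /etc/ssl/certs (e.g. minikubeCA.pem becomes b5213941.0) and runs -checkend 86400 so any cert expiring within a day gets flagged. A sketch that shells out to openssl the same way; the paths are examples:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash reproduces "openssl x509 -hash" plus the ln -fs into
    // /etc/ssl/certs, e.g. minikubeCA.pem -> b5213941.0 in the log.
    func linkByHash(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -f
        return os.Symlink(pem, link)
    }

    // validForADay is the -checkend 86400 probe: a non-zero exit means the
    // cert expires within the next 24 hours.
    func validForADay(cert string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() == nil
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
        fmt.Println("apiserver cert ok for 24h:", validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
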
	I1009 18:58:40.597465   52475 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:40.597566   52475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:58:40.597631   52475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:58:40.642580   52475 cri.go:89] found id: "7b1bfb45a3eaace18d65de2587497b219bf6e3cd798d8c48e231bf1ad257e307"
	I1009 18:58:40.642607   52475 cri.go:89] found id: "20555dbc4eb6b0003b9e7120a568ec710f3b9cfc6a9dbc465b148e97555bf3d3"
	I1009 18:58:40.642612   52475 cri.go:89] found id: "5786f8dd0474b8a2ef87443eeee952136aadfd10370f92cf37e07541a02b70a5"
	I1009 18:58:40.642617   52475 cri.go:89] found id: "768cac5af370455dc385009f432c0d63f62e02688e116b2dec23e64f0894578b"
	I1009 18:58:40.642621   52475 cri.go:89] found id: "d3cfd4255a6edb3154603d5b3ff89b637d21671a133fcc83891af4f6e8a205c4"
	I1009 18:58:40.642624   52475 cri.go:89] found id: "252fc791a47bf2869efe267657a31dc52be38eae30346683b37a301f9ccb7490"
	I1009 18:58:40.642627   52475 cri.go:89] found id: "4593ed25c35b4d5c00b32b02fce74c71137e47c7a00fa840eb6effa737df9cf1"
	I1009 18:58:40.642629   52475 cri.go:89] found id: "3cc8ccc81072eaaa74daa572753c0a6a4c48f52fc71a6775c657b8c33f125b68"
	I1009 18:58:40.642632   52475 cri.go:89] found id: "c1d305c91f1ec6f697cc71695ff4555d0777627b35a9cb3a117ce4ac8070ead5"
	I1009 18:58:40.642639   52475 cri.go:89] found id: "19edec96082f50e67d6381b4cc16aa130713dd9bb9ac86be629415033f890dec"
	I1009 18:58:40.642642   52475 cri.go:89] found id: "ed26a33c61e3ffc9c91ce839a3b1b8244dd3f2f0c615041ef3194575deec434c"
	I1009 18:58:40.642644   52475 cri.go:89] found id: ""
	I1009 18:58:40.642687   52475 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-706613 -n pause-706613
helpers_test.go:269: (dbg) Run:  kubectl --context pause-706613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (74.40s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.71s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:91: Checking cache directory: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/linux/amd64/v0.0.0
no_kubernetes_test.go:100: Cache directory exists but is empty
no_kubernetes_test.go:102: Cache directory /home/jenkins/minikube-integration/21139-11352/.minikube/cache/linux/amd64/v0.0.0 should not exist when using --no-kubernetes
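For context, a minimal Go sketch of the assertion this failure describes (not the actual no_kubernetes_test.go source; verifyNoK8sDownloads and the hard-coded path are illustrative): with --no-kubernetes, the per-version binary cache directory should never be created, so even an empty directory counts as a failure.

	package main

	import (
		"fmt"
		"os"
	)

	// verifyNoK8sDownloads fails if the per-version cache directory exists at
	// all: --no-kubernetes means no kubelet/kubeadm binaries were downloaded,
	// so the directory should never have been created, even empty.
	func verifyNoK8sDownloads(cacheDir string) error {
		entries, err := os.ReadDir(cacheDir)
		if os.IsNotExist(err) {
			return nil // expected outcome: the directory was never created
		}
		if err != nil {
			return err
		}
		if len(entries) == 0 {
			return fmt.Errorf("cache directory %s should not exist when using --no-kubernetes (exists but is empty)", cacheDir)
		}
		return fmt.Errorf("cache directory %s holds %d unexpected entries", cacheDir, len(entries))
	}

	func main() {
		if err := verifyNoK8sDownloads("/home/jenkins/minikube-integration/21139-11352/.minikube/cache/linux/amd64/v0.0.0"); err != nil {
			fmt.Println(err)
		}
	}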
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-156430 -n NoKubernetes-156430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-156430 -n NoKubernetes-156430: exit status 6 (300.696262ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:58:44.848015   54576 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-156430" does not appear in /home/jenkins/minikube-integration/21139-11352/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
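The exit-status-6 path above stems from the kubeconfig lookup missing the profile. A minimal sketch, assuming k8s.io/client-go, of that kind of check (the real status.go logic is more involved; the names here are illustrative):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/21139-11352/kubeconfig"
		profile := "NoKubernetes-156430"

		// Load the kubeconfig and look the profile up among its contexts;
		// a miss is reported as a stale-context warning, not a hard error.
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("get endpoint:", err)
			return
		}
		if _, ok := cfg.Contexts[profile]; !ok {
			fmt.Printf("kubeconfig endpoint: %q does not appear in %s\n", profile, kubeconfig)
		}
	}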
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-156430 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-421337                                                                                                                                                   │ cilium-421337             │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:54 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-852620    │ jenkins │ v1.32.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ -p kubernetes-upgrade-667994                                                                                                                                       │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ stopped-upgrade-644281 stop                                                                                                                                        │ stopped-upgrade-644281    │ jenkins │ v1.32.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p offline-crio-636274                                                                                                                                             │ offline-crio-636274       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-644281 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ delete  │ -p stopped-upgrade-644281                                                                                                                                          │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-852620 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p running-upgrade-852620                                                                                                                                          │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p NoKubernetes-156430                                                                                                                                             │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ ssh     │ force-systemd-flag-026602 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ delete  │ -p force-systemd-flag-026602                                                                                                                                       │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-866940  │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:58:29.883638   54372 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:58:29.883879   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.883887   54372 out.go:374] Setting ErrFile to fd 2...
	I1009 18:58:29.883891   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.884100   54372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:58:29.884605   54372 out.go:368] Setting JSON to false
	I1009 18:58:29.885504   54372 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6050,"bootTime":1760030260,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:58:29.885599   54372 start.go:141] virtualization: kvm guest
	I1009 18:58:29.887772   54372 out.go:179] * [force-systemd-env-866940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:58:29.888974   54372 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:58:29.888981   54372 notify.go:220] Checking for updates...
	I1009 18:58:29.891465   54372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:58:29.892648   54372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:58:29.894080   54372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:29.897419   54372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:58:29.898773   54372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 18:58:29.900598   54372 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:29.900732   54372 config.go:182] Loaded profile config "kubernetes-upgrade-667994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900867   54372 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900971   54372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:58:29.940183   54372 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:58:29.941515   54372 start.go:305] selected driver: kvm2
	I1009 18:58:29.941541   54372 start.go:925] validating driver "kvm2" against <nil>
	I1009 18:58:29.941585   54372 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:58:29.942359   54372 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.942453   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.957181   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.957221   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.972056   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.972112   54372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:58:29.972375   54372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:58:29.972403   54372 cni.go:84] Creating CNI manager for ""
	I1009 18:58:29.972459   54372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:29.972470   54372 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:58:29.972528   54372 start.go:349] cluster config:
	{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:29.972663   54372 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.975331   54372 out.go:179] * Starting "force-systemd-env-866940" primary control-plane node in "force-systemd-env-866940" cluster
	I1009 18:58:28.617442   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:28.618257   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:28.618287   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:28.618661   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:28.618712   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:28.618655   54090 retry.go:31] will retry after 2.048718205s: waiting for domain to come up
	I1009 18:58:30.668860   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:30.669683   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:30.669709   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:30.670246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:30.670315   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:30.670227   54090 retry.go:31] will retry after 2.480631133s: waiting for domain to come up
	I1009 18:58:29.976527   54372 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:29.976597   54372 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:58:29.976609   54372 cache.go:64] Caching tarball of preloaded images
	I1009 18:58:29.976714   54372 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:58:29.976727   54372 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:58:29.976837   54372 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json ...
	I1009 18:58:29.976863   54372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json: {Name:mk06f75730700c1e43a7f0f954227f6cc3fc181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:29.977073   54372 start.go:360] acquireMachinesLock for force-systemd-env-866940: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:58:33.154080   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:33.154827   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:33.154859   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:33.155143   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:33.155182   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:33.155136   54090 retry.go:31] will retry after 2.422416341s: waiting for domain to come up
	I1009 18:58:35.579641   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:35.580224   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:35.580246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:35.580606   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:35.580627   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:35.580578   54090 retry.go:31] will retry after 4.415560096s: waiting for domain to come up
	I1009 18:58:39.440597   52475 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.33579306s)
	I1009 18:58:39.440629   52475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:39.440689   52475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:39.447711   52475 start.go:563] Will wait 60s for crictl version
	I1009 18:58:39.447789   52475 ssh_runner.go:195] Run: which crictl
	I1009 18:58:39.452624   52475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:39.498411   52475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:39.498512   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.529885   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.562952   52475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:58:39.564260   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) Calling .GetIP
	I1009 18:58:39.567702   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568247   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:b2", ip: ""} in network mk-kubernetes-upgrade-667994: {Iface:virbr2 ExpiryTime:2025-10-09 19:56:09 +0000 UTC Type:0 Mac:52:54:00:cc:31:b2 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:kubernetes-upgrade-667994 Clientid:01:52:54:00:cc:31:b2}
	I1009 18:58:39.568281   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined IP address 192.168.50.153 and MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568540   52475 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 18:58:39.573413   52475 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:58:39.573502   52475 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:39.573544   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.623055   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.623085   52475 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:58:39.623145   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.660024   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.660066   52475 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:58:39.660076   52475 kubeadm.go:934] updating node { 192.168.50.153 8443 v1.34.1 crio true true} ...
	I1009 18:58:39.660192   52475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-667994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:58:39.660275   52475 ssh_runner.go:195] Run: crio config
	I1009 18:58:39.710960   52475 cni.go:84] Creating CNI manager for ""
	I1009 18:58:39.710994   52475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:39.711010   52475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:58:39.711045   52475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.153 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-667994 NodeName:kubernetes-upgrade-667994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:58:39.711182   52475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-667994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:58:39.711244   52475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:58:39.725217   52475 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:58:39.725285   52475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:58:39.737633   52475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1009 18:58:39.760544   52475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:58:39.782992   52475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1009 18:58:39.805524   52475 ssh_runner.go:195] Run: grep 192.168.50.153	control-plane.minikube.internal$ /etc/hosts
	I1009 18:58:39.810289   52475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:39.991987   52475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:58:40.016172   52475 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994 for IP: 192.168.50.153
	I1009 18:58:40.016192   52475 certs.go:195] generating shared ca certs ...
	I1009 18:58:40.016208   52475 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:40.016346   52475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:58:40.016383   52475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:58:40.016391   52475 certs.go:257] generating profile certs ...
	I1009 18:58:40.016478   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/client.key
	I1009 18:58:40.016524   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key.c1398b93
	I1009 18:58:40.016583   52475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key
	I1009 18:58:40.016710   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:58:40.016739   52475 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:58:40.016749   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:58:40.016772   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:58:40.016794   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:58:40.016815   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:58:40.016858   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:40.017397   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:58:40.049403   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:58:40.080884   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:58:40.112477   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:58:40.143864   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:58:40.176024   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:58:40.208362   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:58:40.239590   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:58:40.276018   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:58:40.313808   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:58:40.346195   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
	I1009 18:58:42.551378   54372 start.go:364] duration metric: took 12.574251915s to acquireMachinesLock for "force-systemd-env-866940"
	I1009 18:58:42.551445   54372 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Di
sableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:58:42.551577   54372 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 18:58:39.998380   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999086   54061 main.go:141] libmachine: (NoKubernetes-156430) found domain IP: 192.168.61.10
	I1009 18:58:39.999111   54061 main.go:141] libmachine: (NoKubernetes-156430) reserving static IP address...
	I1009 18:58:39.999127   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has current primary IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-156430", mac: "52:54:00:35:84:5d", ip: "192.168.61.10"} in network mk-NoKubernetes-156430
	I1009 18:58:40.260566   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Getting to WaitForSSH function...
	I1009 18:58:40.260617   54061 main.go:141] libmachine: (NoKubernetes-156430) reserved static IP address 192.168.61.10 for domain NoKubernetes-156430
	I1009 18:58:40.260643   54061 main.go:141] libmachine: (NoKubernetes-156430) waiting for SSH...
	I1009 18:58:40.264626   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265277   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.265312   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265489   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH client type: external
	I1009 18:58:40.265523   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa (-rw-------)
	I1009 18:58:40.265550   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:58:40.265563   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | About to run SSH command:
	I1009 18:58:40.265575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | exit 0
	I1009 18:58:40.407821   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | SSH cmd err, output: <nil>: 
	I1009 18:58:40.408193   54061 main.go:141] libmachine: (NoKubernetes-156430) domain creation complete
	I1009 18:58:40.408590   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:40.409303   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409536   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409730   54061 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:58:40.409748   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetState
	I1009 18:58:40.411565   54061 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:58:40.411580   54061 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:58:40.411585   54061 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:58:40.411591   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.414834   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.415447   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415725   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.415952   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416137   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416345   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.416554   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.416871   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.416892   54061 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:58:40.536033   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.536091   54061 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:58:40.536103   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.539601   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540048   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.540083   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540284   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.540461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540600   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540759   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.540932   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.541175   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.541195   54061 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:58:40.668014   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 18:58:40.668182   54061 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:58:40.668202   54061 main.go:141] libmachine: Provisioning with buildroot...
	I1009 18:58:40.668214   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668487   54061 buildroot.go:166] provisioning hostname "NoKubernetes-156430"
	I1009 18:58:40.668527   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668825   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.672094   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672562   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.672591   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672839   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.673046   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673223   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.673543   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.673796   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.673811   54061 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-156430 && echo "NoKubernetes-156430" | sudo tee /etc/hostname
	I1009 18:58:40.814131   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-156430
	
	I1009 18:58:40.814166   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.817973   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818494   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.818575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818776   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.819070   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819272   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819482   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.819704   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.819912   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.819928   54061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-156430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-156430/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-156430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:58:40.960331   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.960360   54061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:58:40.960384   54061 buildroot.go:174] setting up certificates
	I1009 18:58:40.960401   54061 provision.go:84] configureAuth start
	I1009 18:58:40.960415   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.960761   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:40.964382   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.964921   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.964954   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.965178   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.968310   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.968870   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.968919   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.969111   54061 provision.go:143] copyHostCerts
	I1009 18:58:40.969145   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969181   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:58:40.969197   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969271   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:58:40.969374   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969393   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:58:40.969398   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969425   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:58:40.969504   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969533   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:58:40.969543   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969586   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:58:40.969702   54061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-156430 san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
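
The server certificate generated here carries the SAN set listed in the log line: the loopback address, the VM's DHCP address, the machine name, localhost, and minikube. A compact crypto/x509 sketch that produces a certificate with that SAN list; for brevity it self-signs, whereas the real step signs with the ca.pem/ca-key.pem pair named above:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-156430"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The san=[...] list from the log, split into IP and DNS entries.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.10")},
            DNSNames:    []string{"NoKubernetes-156430", "localhost", "minikube"},
        }
        // Self-signed for the sketch; the real provisioner signs with the CA key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
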
	I1009 18:58:41.825514   54061 provision.go:177] copyRemoteCerts
	I1009 18:58:41.825595   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:58:41.825625   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:41.828960   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:41.829483   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829699   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:41.829890   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:41.830096   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:41.830253   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:41.925362   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:58:41.925436   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:58:41.956804   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:58:41.956924   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:58:41.989131   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:58:41.989205   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:58:42.020058   54061 provision.go:87] duration metric: took 1.059626183s to configureAuth
	I1009 18:58:42.020089   54061 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:58:42.020303   54061 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:42.020385   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.024034   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.024450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024676   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.024865   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025026   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025234   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.025433   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.025638   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.025653   54061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:58:42.274423   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:58:42.274451   54061 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:58:42.274461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetURL
	I1009 18:58:42.275927   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | using libvirt version 8000000
	I1009 18:58:42.278858   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279256   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.279289   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279476   54061 main.go:141] libmachine: Docker is up and running!
	I1009 18:58:42.279492   54061 main.go:141] libmachine: Reticulating splines...
	I1009 18:58:42.279499   54061 client.go:171] duration metric: took 22.713284182s to LocalClient.Create
	I1009 18:58:42.279522   54061 start.go:167] duration metric: took 22.713359926s to libmachine.API.Create "NoKubernetes-156430"
	I1009 18:58:42.279548   54061 start.go:293] postStartSetup for "NoKubernetes-156430" (driver="kvm2")
	I1009 18:58:42.279558   54061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:58:42.279578   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.279814   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:58:42.279845   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.282285   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282640   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.282674   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282798   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.282976   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.283169   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.283296   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.373337   54061 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:58:42.378514   54061 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:58:42.378548   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:58:42.378618   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:58:42.378713   54061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:58:42.378732   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> /etc/ssl/certs/152632.pem
	I1009 18:58:42.378881   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:58:42.391375   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:42.422367   54061 start.go:296] duration metric: took 142.804384ms for postStartSetup
	I1009 18:58:42.422479   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:42.423258   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.426192   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426499   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.426529   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426863   54061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/NoKubernetes-156430/config.json ...
	I1009 18:58:42.427143   54061 start.go:128] duration metric: took 22.88324393s to createHost
	I1009 18:58:42.427175   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.429891   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430321   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.430350   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430554   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.430735   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.430866   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.431027   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.431224   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.431461   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.431473   54061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:58:42.551194   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760036322.526817929
	
	I1009 18:58:42.551223   54061 fix.go:216] guest clock: 1760036322.526817929
	I1009 18:58:42.551235   54061 fix.go:229] Guest: 2025-10-09 18:58:42.526817929 +0000 UTC Remote: 2025-10-09 18:58:42.427160398 +0000 UTC m=+24.708548246 (delta=99.657531ms)
	I1009 18:58:42.551280   54061 fix.go:200] guest clock delta is within tolerance: 99.657531ms
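
The clock check treats the guest's `date +%s.%N` output as seconds plus a nine-digit nanosecond fraction, subtracts the host-side timestamp captured around the SSH round-trip, and accepts the drift when it falls inside a tolerance (the 99.657531ms above passed). A sketch reusing the exact values from this run, assuming a one-second tolerance for illustration:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output into a time.Time.
    // Assumes a nine-digit nanosecond fraction, as in the log.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1760036322.526817929") // guest clock from the log
        if err != nil {
            panic(err)
        }
        remote := time.Date(2025, 10, 9, 18, 58, 42, 427160398, time.UTC) // host-side "Remote"
        delta := guest.Sub(remote)                                        // 99.657531ms
        fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 1)
    }
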
	I1009 18:58:42.551289   54061 start.go:83] releasing machines lock for "NoKubernetes-156430", held for 23.007526235s
	I1009 18:58:42.551317   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.551599   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.555353   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.555871   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.555908   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.556160   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556731   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556904   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556998   54061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:58:42.557069   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.557138   54061 ssh_runner.go:195] Run: cat /version.json
	I1009 18:58:42.557165   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.560586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.560975   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561008   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561033   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561193   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.561393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.561594   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.561636   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561916   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.562392   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.562797   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.563244   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.563412   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.563532   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.687591   54061 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:42.696846   54061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:58:42.860249   54061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:58:42.867451   54061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:58:42.867517   54061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:58:42.897113   54061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
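
Only the bridge and podman CNI configs are renamed, and the .mk_disabled suffix is exactly what the find expression excludes, which makes the step safe to re-run. A userland Go sketch of the same rename pass (paths as in the log; error handling simplified):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Rename bridge/podman CNI configs so CRI-O ignores them, mirroring
        // the `find ... -exec mv {} {}.mk_disabled` invocation above.
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled; keeps the pass idempotent
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    continue
                }
                fmt.Println("disabled", f)
            }
        }
    }
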
	I1009 18:58:42.897141   54061 start.go:495] detecting cgroup driver to use...
	I1009 18:58:42.897220   54061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:58:42.919672   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:58:42.942589   54061 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:58:42.942699   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:58:42.965057   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:58:42.983975   54061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:58:43.208244   54061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:58:43.409851   54061 docker.go:234] disabling docker service ...
	I1009 18:58:43.409937   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:58:43.431496   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:58:43.449349   54061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:58:43.713575   54061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:58:43.917104   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:58:43.940403   54061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:58:43.966987   54061 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:58:43.967054   54061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 18:58:43.967114   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.985635   54061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:58:43.985708   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.999934   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:44.014371   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
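
The four sed invocations amount to three config changes in 02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and replace any conmon_cgroup setting with "pod" (a delete followed by an append after the cgroup_manager line). A regexp sketch of the same rewrite over an illustrative snippet, not the real file contents:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewrite applies the same three edits as the sed commands above.
    func rewrite(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
            ReplaceAllString(conf, "") // the `/conmon_cgroup = .*/d` pass
        return regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"") // the `/a` pass
    }

    func main() {
        fmt.Print(rewrite("pause_image = \"x\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"))
    }
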
	I1009 18:58:44.031915   54061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:58:44.047615   54061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:58:44.060030   54061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:58:44.060125   54061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:58:44.088348   54061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
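
The sysctl failure above is expected on a fresh VM: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, so the provisioner falls back to modprobe and then enables IPv4 forwarding (the log even notes the failed check "might be okay"). The same fallback as a sketch, root assumed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge sysctl is missing, br_netfilter isn't loaded yet.
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
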
	I1009 18:58:44.105749   54061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:44.276388   54061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:58:44.400747   54061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:44.400833   54061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:44.408292   54061 start.go:563] Will wait 60s for crictl version
	I1009 18:58:44.408361   54061 ssh_runner.go:195] Run: which crictl
	I1009 18:58:44.413380   54061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:44.465676   54061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
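
Both 60-second waits are plain polls: first for the socket path to exist after the crio restart, then for `crictl version` to answer (its output is the Version/RuntimeName block just above). A sketch of the socket poll under the same deadline; the half-second interval is an assumption, not taken from the source:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Poll for /var/run/crio/crio.sock for up to 60s; the crictl wait
        // that follows in the log is analogous, just running a command.
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
                fmt.Println("socket ready")
                return
            }
            if time.Now().After(deadline) {
                fmt.Fprintln(os.Stderr, "timed out waiting for crio.sock")
                os.Exit(1)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
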
	I1009 18:58:44.465768   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.505682   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.550424   54061 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1009 18:58:44.551824   54061 ssh_runner.go:195] Run: rm -f paused
	I1009 18:58:44.558855   54061 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:58:44.562268   54061 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:58:42.553872   54372 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 18:58:42.554150   54372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:42.554213   54372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:42.573562   54372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1009 18:58:42.574189   54372 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:42.574878   54372 main.go:141] libmachine: Using API Version  1
	I1009 18:58:42.574909   54372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:42.575408   54372 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:42.575629   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetMachineName
	I1009 18:58:42.575811   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .DriverName
	I1009 18:58:42.575965   54372 start.go:159] libmachine.API.Create for "force-systemd-env-866940" (driver="kvm2")
	I1009 18:58:42.575996   54372 client.go:168] LocalClient.Create starting
	I1009 18:58:42.576048   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem
	I1009 18:58:42.576104   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576129   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576200   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem
	I1009 18:58:42.576230   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576251   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576284   54372 main.go:141] libmachine: Running pre-create checks...
	I1009 18:58:42.576307   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .PreCreateCheck
	I1009 18:58:42.576640   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetConfigRaw
	I1009 18:58:42.577094   54372 main.go:141] libmachine: Creating machine...
	I1009 18:58:42.577109   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .Create
	I1009 18:58:42.577271   54372 main.go:141] libmachine: (force-systemd-env-866940) creating domain...
	I1009 18:58:42.577292   54372 main.go:141] libmachine: (force-systemd-env-866940) creating network...
	I1009 18:58:42.578684   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | found existing default network
	I1009 18:58:42.578863   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network connections='3'>
	I1009 18:58:42.578882   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>default</name>
	I1009 18:58:42.578894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 18:58:42.578906   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <forward mode='nat'>
	I1009 18:58:42.578936   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <nat>
	I1009 18:58:42.578959   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <port start='1024' end='65535'/>
	I1009 18:58:42.578972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </nat>
	I1009 18:58:42.578983   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </forward>
	I1009 18:58:42.578993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 18:58:42.579013   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 18:58:42.579030   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 18:58:42.579055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.579074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 18:58:42.579083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.579091   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.579099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.579106   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.579960   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.579788   54509 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:eb:8e} reservation:<nil>}
	I1009 18:58:42.580630   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.580543   54509 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:2a:69} reservation:<nil>}
	I1009 18:58:42.581452   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.581375   54509 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:cd:a4} reservation:<nil>}
	I1009 18:58:42.582428   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.582299   54509 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003429c0}
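
The subnet probe walks private /24 candidates and takes the first one no existing libvirt bridge claims; the skipped entries above (…39, …50, …61) step by 11, landing on 192.168.72.0/24. A sketch of that walk, with the step size inferred from this log rather than taken from the source:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Subnets already claimed by virbr1..virbr3, per the lines above.
        taken := map[string]bool{
            "192.168.39.0/24": true,
            "192.168.50.0/24": true,
            "192.168.61.0/24": true,
        }
        for third := 39; third <= 254; third += 11 { // step inferred from the log
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                continue // "skipping subnet ... that is taken"
            }
            if _, subnet, err := net.ParseCIDR(cidr); err == nil {
                fmt.Println("using free private subnet", subnet) // 192.168.72.0/24
                return
            }
        }
    }
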
	I1009 18:58:42.582456   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | defining private network:
	I1009 18:58:42.582477   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.582489   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.582499   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.582514   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.582525   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.582535   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.582546   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.582560   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.582572   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.582579   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.582591   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.588855   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | creating private network mk-force-systemd-env-866940 192.168.72.0/24...
	I1009 18:58:42.674549   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | private network mk-force-systemd-env-866940 192.168.72.0/24 created
	I1009 18:58:42.674894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.674926   54372 main.go:141] libmachine: (force-systemd-env-866940) setting up store path in /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:42.674935   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.674947   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>e017ca39-b131-46c7-8a35-2b8acbb67618</uuid>
	I1009 18:58:42.674955   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1009 18:58:42.674964   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:e1:bc:8c'/>
	I1009 18:58:42.674976   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.674986   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.675005   54372 main.go:141] libmachine: (force-systemd-env-866940) building disk image from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:58:42.675015   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.675024   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.675033   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.675055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.675096   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.675125   54372 main.go:141] libmachine: (force-systemd-env-866940) Downloading /home/jenkins/minikube-integration/21139-11352/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 18:58:42.675139   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.675179   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.674877   54509 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:42.935427   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.935240   54509 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/id_rsa...
	I1009 18:58:43.757919   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757713   54509 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk...
	I1009 18:58:43.757972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing magic tar header
	I1009 18:58:43.757993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing SSH key tar header
	I1009 18:58:43.758008   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757830   54509 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:43.758027   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 (perms=drwx------)
	I1009 18:58:43.758063   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:58:43.758078   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube (perms=drwxr-xr-x)
	I1009 18:58:43.758093   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940
	I1009 18:58:43.758110   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines
	I1009 18:58:43.758123   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:43.758144   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352
	I1009 18:58:43.758157   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:58:43.758172   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352 (perms=drwxrwxr-x)
	I1009 18:58:43.758183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins
	I1009 18:58:43.758195   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home
	I1009 18:58:43.758208   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:58:43.758221   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:58:43.758239   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain...
	I1009 18:58:43.758248   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | skipping /home - not owner
	I1009 18:58:43.759588   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain using XML: 
	I1009 18:58:43.759617   54372 main.go:141] libmachine: (force-systemd-env-866940) <domain type='kvm'>
	I1009 18:58:43.759630   54372 main.go:141] libmachine: (force-systemd-env-866940)   <name>force-systemd-env-866940</name>
	I1009 18:58:43.759642   54372 main.go:141] libmachine: (force-systemd-env-866940)   <memory unit='MiB'>3072</memory>
	I1009 18:58:43.759656   54372 main.go:141] libmachine: (force-systemd-env-866940)   <vcpu>2</vcpu>
	I1009 18:58:43.759667   54372 main.go:141] libmachine: (force-systemd-env-866940)   <features>
	I1009 18:58:43.759680   54372 main.go:141] libmachine: (force-systemd-env-866940)     <acpi/>
	I1009 18:58:43.759686   54372 main.go:141] libmachine: (force-systemd-env-866940)     <apic/>
	I1009 18:58:43.759695   54372 main.go:141] libmachine: (force-systemd-env-866940)     <pae/>
	I1009 18:58:43.759700   54372 main.go:141] libmachine: (force-systemd-env-866940)   </features>
	I1009 18:58:43.759710   54372 main.go:141] libmachine: (force-systemd-env-866940)   <cpu mode='host-passthrough'>
	I1009 18:58:43.759720   54372 main.go:141] libmachine: (force-systemd-env-866940)   </cpu>
	I1009 18:58:43.759728   54372 main.go:141] libmachine: (force-systemd-env-866940)   <os>
	I1009 18:58:43.759738   54372 main.go:141] libmachine: (force-systemd-env-866940)     <type>hvm</type>
	I1009 18:58:43.759778   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='cdrom'/>
	I1009 18:58:43.759807   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='hd'/>
	I1009 18:58:43.759817   54372 main.go:141] libmachine: (force-systemd-env-866940)     <bootmenu enable='no'/>
	I1009 18:58:43.759827   54372 main.go:141] libmachine: (force-systemd-env-866940)   </os>
	I1009 18:58:43.759841   54372 main.go:141] libmachine: (force-systemd-env-866940)   <devices>
	I1009 18:58:43.759855   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='cdrom'>
	I1009 18:58:43.759874   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.759892   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.759904   54372 main.go:141] libmachine: (force-systemd-env-866940)       <readonly/>
	I1009 18:58:43.759917   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.759931   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='disk'>
	I1009 18:58:43.759949   54372 main.go:141] libmachine: (force-systemd-env-866940)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:58:43.759967   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.759981   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.759994   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.760007   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760019   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.760049   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760077   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760096   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760108   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='default'/>
	I1009 18:58:43.760115   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760124   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760134   54372 main.go:141] libmachine: (force-systemd-env-866940)     <serial type='pty'>
	I1009 18:58:43.760143   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target port='0'/>
	I1009 18:58:43.760157   54372 main.go:141] libmachine: (force-systemd-env-866940)     </serial>
	I1009 18:58:43.760170   54372 main.go:141] libmachine: (force-systemd-env-866940)     <console type='pty'>
	I1009 18:58:43.760181   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target type='serial' port='0'/>
	I1009 18:58:43.760193   54372 main.go:141] libmachine: (force-systemd-env-866940)     </console>
	I1009 18:58:43.760203   54372 main.go:141] libmachine: (force-systemd-env-866940)     <rng model='virtio'>
	I1009 18:58:43.760213   54372 main.go:141] libmachine: (force-systemd-env-866940)       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.760223   54372 main.go:141] libmachine: (force-systemd-env-866940)     </rng>
	I1009 18:58:43.760236   54372 main.go:141] libmachine: (force-systemd-env-866940)   </devices>
	I1009 18:58:43.760249   54372 main.go:141] libmachine: (force-systemd-env-866940) </domain>
	I1009 18:58:43.760272   54372 main.go:141] libmachine: (force-systemd-env-866940) 
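
The driver assembles this XML itself before handing it to libvirt, which then fills in defaults (the fuller XML echoed back under "starting domain XML" below shows the added controllers, PCI addresses, and emulator path). A trimmed encoding/xml mirror of just the header fields, with hypothetical struct names, to show the shape being serialized:

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    // domain mirrors only the first few elements of the definition above;
    // the type is hypothetical and far from libvirt's full schema.
    type domain struct {
        XMLName xml.Name `xml:"domain"`
        Type    string   `xml:"type,attr"`
        Name    string   `xml:"name"`
        Memory  struct {
            Unit  string `xml:"unit,attr"`
            Value string `xml:",chardata"`
        } `xml:"memory"`
        VCPU int `xml:"vcpu"`
    }

    func main() {
        d := domain{Type: "kvm", Name: "force-systemd-env-866940", VCPU: 2}
        d.Memory.Unit = "MiB"
        d.Memory.Value = "3072"
        out, _ := xml.MarshalIndent(d, "", "  ")
        fmt.Println(string(out))
    }
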
	I1009 18:58:43.765904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:78:a8:f7 in network default
	I1009 18:58:43.766797   54372 main.go:141] libmachine: (force-systemd-env-866940) starting domain...
	I1009 18:58:43.766823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:3d:b9:89 in network mk-force-systemd-env-866940
	I1009 18:58:43.766833   54372 main.go:141] libmachine: (force-systemd-env-866940) ensuring networks are active...
	I1009 18:58:43.768013   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network default is active
	I1009 18:58:43.768563   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network mk-force-systemd-env-866940 is active
	I1009 18:58:43.769446   54372 main.go:141] libmachine: (force-systemd-env-866940) getting domain XML...
	I1009 18:58:43.770823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | starting domain XML:
	I1009 18:58:43.770904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <domain type='kvm'>
	I1009 18:58:43.770920   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>force-systemd-env-866940</name>
	I1009 18:58:43.770928   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>01280892-0a35-436e-8b77-3f763c9a68f6</uuid>
	I1009 18:58:43.770945   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 18:58:43.770952   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 18:58:43.770961   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:58:43.770967   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <os>
	I1009 18:58:43.770977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:58:43.770985   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='cdrom'/>
	I1009 18:58:43.770993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='hd'/>
	I1009 18:58:43.771001   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <bootmenu enable='no'/>
	I1009 18:58:43.771010   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </os>
	I1009 18:58:43.771017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <features>
	I1009 18:58:43.771059   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <acpi/>
	I1009 18:58:43.771083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <apic/>
	I1009 18:58:43.771099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <pae/>
	I1009 18:58:43.771107   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </features>
	I1009 18:58:43.771122   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:58:43.771131   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <clock offset='utc'/>
	I1009 18:58:43.771151   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:58:43.771163   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:58:43.771189   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_crash>destroy</on_crash>
	I1009 18:58:43.771268   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <devices>
	I1009 18:58:43.772871   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:58:43.772899   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='cdrom'>
	I1009 18:58:43.772910   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:58:43.772924   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.772932   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.772941   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <readonly/>
	I1009 18:58:43.772950   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:58:43.772958   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.772966   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='disk'>
	I1009 18:58:43.772977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:58:43.772991   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.773017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.773051   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:58:43.773065   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.773074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:58:43.773087   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:58:43.773095   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773108   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:58:43.773124   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:58:43.773138   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:58:43.773148   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773160   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773170   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:3d:b9:89'/>
	I1009 18:58:43.773183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.773193   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773207   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:58:43.773217   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773233   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773243   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:78:a8:f7'/>
	I1009 18:58:43.773260   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='default'/>
	I1009 18:58:43.773270   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773284   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:58:43.773293   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773305   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <serial type='pty'>
	I1009 18:58:43.773315   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='isa-serial' port='0'>
	I1009 18:58:43.773327   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |         <model name='isa-serial'/>
	I1009 18:58:43.773336   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       </target>
	I1009 18:58:43.773347   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </serial>
	I1009 18:58:43.773356   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <console type='pty'>
	I1009 18:58:43.773367   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='serial' port='0'/>
	I1009 18:58:43.773376   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </console>
	I1009 18:58:43.773388   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:58:43.773397   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:58:43.773409   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <audio id='1' type='none'/>
	I1009 18:58:43.773419   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <memballoon model='virtio'>
	I1009 18:58:43.773433   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:58:43.773442   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </memballoon>
	I1009 18:58:43.773450   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <rng model='virtio'>
	I1009 18:58:43.773459   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.773469   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:58:43.773476   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </rng>
	I1009 18:58:43.773503   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </devices>
	I1009 18:58:43.773510   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </domain>
	I1009 18:58:43.773521   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	
	
	==> CRI-O <==
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.281969164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760036325281912777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f489149d-3067-4606-883a-5c830bc2455a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.282831415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05985b46-f66b-4ffe-82a4-5bb7cb2e751d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.282904971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05985b46-f66b-4ffe-82a4-5bb7cb2e751d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.282987315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=05985b46-f66b-4ffe-82a4-5bb7cb2e751d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.325183557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ecc9d1a-104a-4652-8a24-e66aacbef4ef name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.325403779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ecc9d1a-104a-4652-8a24-e66aacbef4ef name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.326978309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10a3251b-f02e-4f29-9e78-5790db8bc960 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.327088689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760036325327069081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10a3251b-f02e-4f29-9e78-5790db8bc960 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.328744316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0df1339-cadd-427b-bf4b-8585181503ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.328846937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0df1339-cadd-427b-bf4b-8585181503ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.328956836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0df1339-cadd-427b-bf4b-8585181503ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.371587057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecf2de93-1cc6-4b56-a539-121ab6fe4f54 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.371743941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecf2de93-1cc6-4b56-a539-121ab6fe4f54 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.373707472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03568781-da2a-4db2-9ab1-a06d32e78019 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.373877373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760036325373849468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03568781-da2a-4db2-9ab1-a06d32e78019 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.374894159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=209ed35e-8c4f-440b-b4ac-9e871046309e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.374985085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=209ed35e-8c4f-440b-b4ac-9e871046309e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.375035529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=209ed35e-8c4f-440b-b4ac-9e871046309e name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.416304808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea651c78-b125-4a06-95ff-fe78a498e779 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.416472490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea651c78-b125-4a06-95ff-fe78a498e779 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.417725529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3799613-9368-4da7-9733-71dd7c6b7595 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.417842045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760036325417820126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3799613-9368-4da7-9733-71dd7c6b7595 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.418249967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e39a9e31-81c6-4d3e-8e00-ddbe846ae516 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.418309343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e39a9e31-81c6-4d3e-8e00-ddbe846ae516 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:58:45 NoKubernetes-156430 crio[823]: time="2025-10-09 18:58:45.418414359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e39a9e31-81c6-4d3e-8e00-ddbe846ae516 name=/runtime.v1.RuntimeService/ListContainers
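Every ListContainers response in this CRI-O excerpt is an empty list, which is the expected steady state for a profile started with --no-kubernetes: the runtime is healthy and answering the periodic Version/ImageFsInfo/ListContainers polling, but nothing has scheduled a pod. A minimal sketch of the same check done interactively (assuming the NoKubernetes-156430 VM is still up):
	# Sketch: query the CRI socket directly; both lists should come back empty
	minikube -p NoKubernetes-156430 ssh -- sudo crictl ps -a
	minikube -p NoKubernetes-156430 ssh -- sudo crictl pods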
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
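This failure is expected rather than diagnostic: the profile was started with --no-kubernetes, so its recorded KubernetesVersion is v0.0.0 (see the "Loaded profile config" line further down) and no kubectl was ever staged under /var/lib/minikube/binaries. The post-mortem helper is probing a path that legitimately does not exist. A minimal sketch of the direct check (assuming the VM is reachable):
	# Sketch: confirm no Kubernetes binaries were downloaded into the guest
	minikube -p NoKubernetes-156430 ssh -- ls /var/lib/minikube/binaries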
	
	
	==> dmesg <==
	[Oct 9 18:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002792] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184215] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107244] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 18:58:45 up 0 min,  0 users,  load average: 0.19, 0.05, 0.01
	Linux NoKubernetes-156430 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-156430 -n NoKubernetes-156430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-156430 -n NoKubernetes-156430: exit status 6 (280.536216ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:58:45.887481   54655 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-156430" does not appear in /home/jenkins/minikube-integration/21139-11352/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-156430" apiserver is not running, skipping kubectl commands (state="Stopped")
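Exit status 6 from `status` means the host was queried successfully but the kubeconfig check failed; status.go:458 reports that "NoKubernetes-156430" has no entry in the integration kubeconfig, which again follows from the no-Kubernetes start (nothing ever wrote a context for this profile). The warning's suggested fix only makes sense for a profile that should have a context:
	# Sketch: what the warning proposes; only meaningful when an apiserver actually runs
	out/minikube-linux-amd64 -p NoKubernetes-156430 update-context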
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-156430 -n NoKubernetes-156430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-156430 -n NoKubernetes-156430: exit status 6 (308.875466ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:58:46.195985   54682 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-156430" does not appear in /home/jenkins/minikube-integration/21139-11352/kubeconfig

                                                
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-156430 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-421337                                                                                                                                                   │ cilium-421337             │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:54 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-852620    │ jenkins │ v1.32.0 │ 09 Oct 25 18:54 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ -p kubernetes-upgrade-667994                                                                                                                                       │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:56 UTC │
	│ stop    │ stopped-upgrade-644281 stop                                                                                                                                        │ stopped-upgrade-644281    │ jenkins │ v1.32.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ delete  │ -p offline-crio-636274                                                                                                                                             │ offline-crio-636274       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-667994 │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-644281 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ delete  │ -p stopped-upgrade-644281                                                                                                                                          │ stopped-upgrade-644281    │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:56 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │                     │
	│ start   │ -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:56 UTC │ 09 Oct 25 18:57 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-852620 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p running-upgrade-852620                                                                                                                                          │ running-upgrade-852620    │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:57 UTC │
	│ start   │ -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false              │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p pause-706613 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-706613              │ jenkins │ v1.37.0 │ 09 Oct 25 18:57 UTC │                     │
	│ delete  │ -p NoKubernetes-156430                                                                                                                                             │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-156430       │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ ssh     │ force-systemd-flag-026602 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                               │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ delete  │ -p force-systemd-flag-026602                                                                                                                                       │ force-systemd-flag-026602 │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │ 09 Oct 25 18:58 UTC │
	│ start   │ -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-866940  │ jenkins │ v1.37.0 │ 09 Oct 25 18:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:58:29.883638   54372 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:58:29.883879   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.883887   54372 out.go:374] Setting ErrFile to fd 2...
	I1009 18:58:29.883891   54372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:29.884100   54372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:58:29.884605   54372 out.go:368] Setting JSON to false
	I1009 18:58:29.885504   54372 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6050,"bootTime":1760030260,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:58:29.885599   54372 start.go:141] virtualization: kvm guest
	I1009 18:58:29.887772   54372 out.go:179] * [force-systemd-env-866940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:58:29.888974   54372 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:58:29.888981   54372 notify.go:220] Checking for updates...
	I1009 18:58:29.891465   54372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:58:29.892648   54372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:58:29.894080   54372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:29.897419   54372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:58:29.898773   54372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 18:58:29.900598   54372 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:29.900732   54372 config.go:182] Loaded profile config "kubernetes-upgrade-667994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900867   54372 config.go:182] Loaded profile config "pause-706613": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:29.900971   54372 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:58:29.940183   54372 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:58:29.941515   54372 start.go:305] selected driver: kvm2
	I1009 18:58:29.941541   54372 start.go:925] validating driver "kvm2" against <nil>
	I1009 18:58:29.941585   54372 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:58:29.942359   54372 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.942453   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.957181   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.957221   54372 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:58:29.972056   54372 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:58:29.972112   54372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:58:29.972375   54372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:58:29.972403   54372 cni.go:84] Creating CNI manager for ""
	I1009 18:58:29.972459   54372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:29.972470   54372 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:58:29.972528   54372 start.go:349] cluster config:
	{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:29.972663   54372 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:58:29.975331   54372 out.go:179] * Starting "force-systemd-env-866940" primary control-plane node in "force-systemd-env-866940" cluster
	I1009 18:58:28.617442   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:28.618257   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:28.618287   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:28.618661   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:28.618712   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:28.618655   54090 retry.go:31] will retry after 2.048718205s: waiting for domain to come up
	I1009 18:58:30.668860   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:30.669683   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:30.669709   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:30.670246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:30.670315   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:30.670227   54090 retry.go:31] will retry after 2.480631133s: waiting for domain to come up
	I1009 18:58:29.976527   54372 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:29.976597   54372 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:58:29.976609   54372 cache.go:64] Caching tarball of preloaded images
	I1009 18:58:29.976714   54372 preload.go:238] Found /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:58:29.976727   54372 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:58:29.976837   54372 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json ...
	I1009 18:58:29.976863   54372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json: {Name:mk06f75730700c1e43a7f0f954227f6cc3fc181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
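The one-line "cluster config:" dump above is also persisted as JSON at the profile path shown here, which is usually the easier artifact to inspect after a run. A minimal sketch (assuming jq is installed on the agent):
	# Sketch: read selected fields back out of the persisted profile config
	jq '.Driver, .Memory, .KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/force-systemd-env-866940/config.json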
	I1009 18:58:29.977073   54372 start.go:360] acquireMachinesLock for force-systemd-env-866940: {Name:mk84f34bbcdd84278c297cd43c14b8854625411b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:58:33.154080   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:33.154827   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:33.154859   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:33.155143   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:33.155182   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:33.155136   54090 retry.go:31] will retry after 2.422416341s: waiting for domain to come up
	I1009 18:58:35.579641   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:35.580224   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | no network interface addresses found for domain NoKubernetes-156430 (source=lease)
	I1009 18:58:35.580246   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | trying to list again with source=arp
	I1009 18:58:35.580606   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find current IP address of domain NoKubernetes-156430 in network mk-NoKubernetes-156430 (interfaces detected: [])
	I1009 18:58:35.580627   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | I1009 18:58:35.580578   54090 retry.go:31] will retry after 4.415560096s: waiting for domain to come up
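The retry loop above is the kvm2 driver waiting for the freshly created guest to acquire an address on the mk-NoKubernetes-156430 network: it checks the libvirt lease table first, falls back to ARP, and backs off with growing delays until an interface appears. A minimal sketch of the lease it is polling for (assuming host access to the system libvirt instance):
	# Sketch: the DHCP lease the driver waits on, visible once the guest's NIC is up
	virsh --connect qemu:///system net-dhcp-leases mk-NoKubernetes-156430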
	I1009 18:58:39.440597   52475 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.33579306s)
	I1009 18:58:39.440629   52475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:39.440689   52475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:39.447711   52475 start.go:563] Will wait 60s for crictl version
	I1009 18:58:39.447789   52475 ssh_runner.go:195] Run: which crictl
	I1009 18:58:39.452624   52475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:39.498411   52475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:58:39.498512   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.529885   52475 ssh_runner.go:195] Run: crio --version
	I1009 18:58:39.562952   52475 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:58:39.564260   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) Calling .GetIP
	I1009 18:58:39.567702   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568247   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:b2", ip: ""} in network mk-kubernetes-upgrade-667994: {Iface:virbr2 ExpiryTime:2025-10-09 19:56:09 +0000 UTC Type:0 Mac:52:54:00:cc:31:b2 Iaid: IPaddr:192.168.50.153 Prefix:24 Hostname:kubernetes-upgrade-667994 Clientid:01:52:54:00:cc:31:b2}
	I1009 18:58:39.568281   52475 main.go:141] libmachine: (kubernetes-upgrade-667994) DBG | domain kubernetes-upgrade-667994 has defined IP address 192.168.50.153 and MAC address 52:54:00:cc:31:b2 in network mk-kubernetes-upgrade-667994
	I1009 18:58:39.568540   52475 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 18:58:39.573413   52475 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:58:39.573502   52475 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:58:39.573544   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.623055   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.623085   52475 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:58:39.623145   52475 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:58:39.660024   52475 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:58:39.660066   52475 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:58:39.660076   52475 kubeadm.go:934] updating node { 192.168.50.153 8443 v1.34.1 crio true true} ...
	I1009 18:58:39.660192   52475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-667994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
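The unit fragment above is written into the guest as a systemd drop-in (the 325-byte 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before substituting the minikube-specific kubelet invocation. A minimal sketch for inspecting the merged result (assuming an SSH session into the node):
	# Sketch: show the base kubelet unit plus the 10-kubeadm.conf drop-in as systemd merges them
	minikube -p kubernetes-upgrade-667994 ssh -- systemctl cat kubelet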
	I1009 18:58:39.660275   52475 ssh_runner.go:195] Run: crio config
	I1009 18:58:39.710960   52475 cni.go:84] Creating CNI manager for ""
	I1009 18:58:39.710994   52475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:58:39.711010   52475 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:58:39.711045   52475 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.153 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-667994 NodeName:kubernetes-upgrade-667994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:58:39.711182   52475 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-667994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
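The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what minikube renders before scp'ing /var/tmp/minikube/kubeadm.yaml.new to the node below. A minimal sketch of a static sanity check (assuming the staged kubeadm is new enough to ship the `config validate` subcommand, which v1.34.1 should be):
	# Sketch: validate the rendered kubeadm config in place on the node
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new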
	I1009 18:58:39.711244   52475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:58:39.725217   52475 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:58:39.725285   52475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:58:39.737633   52475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1009 18:58:39.760544   52475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:58:39.782992   52475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1009 18:58:39.805524   52475 ssh_runner.go:195] Run: grep 192.168.50.153	control-plane.minikube.internal$ /etc/hosts
	I1009 18:58:39.810289   52475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:39.991987   52475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:58:40.016172   52475 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994 for IP: 192.168.50.153
	I1009 18:58:40.016192   52475 certs.go:195] generating shared ca certs ...
	I1009 18:58:40.016208   52475 certs.go:227] acquiring lock for ca certs: {Name:mkabdf8f7a0a4430df5e49c3a8899ada46abda15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:58:40.016346   52475 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key
	I1009 18:58:40.016383   52475 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key
	I1009 18:58:40.016391   52475 certs.go:257] generating profile certs ...
	I1009 18:58:40.016478   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/client.key
	I1009 18:58:40.016524   52475 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key.c1398b93
	I1009 18:58:40.016583   52475 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key
	I1009 18:58:40.016710   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem (1338 bytes)
	W1009 18:58:40.016739   52475 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263_empty.pem, impossibly tiny 0 bytes
	I1009 18:58:40.016749   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:58:40.016772   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:58:40.016794   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:58:40.016815   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem (1675 bytes)
	I1009 18:58:40.016858   52475 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:40.017397   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:58:40.049403   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:58:40.080884   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:58:40.112477   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:58:40.143864   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 18:58:40.176024   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:58:40.208362   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:58:40.239590   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kubernetes-upgrade-667994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:58:40.276018   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:58:40.313808   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/15263.pem --> /usr/share/ca-certificates/15263.pem (1338 bytes)
	I1009 18:58:40.346195   52475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /usr/share/ca-certificates/152632.pem (1708 bytes)
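The cert sync above is the cheap path: certs.go verified that the host-side CA and profile certificates are still valid ("skipping valid ... cert regeneration"), so they are pushed unchanged into /var/lib/minikube/certs and /usr/share/ca-certificates. A minimal sketch of checking the reused apiserver cert in the guest (assuming openssl is present in the Buildroot image):
	# Sketch: confirm the reused apiserver certificate is inside its validity window
	minikube -p kubernetes-upgrade-667994 ssh -- sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt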
	I1009 18:58:42.551378   54372 start.go:364] duration metric: took 12.574251915s to acquireMachinesLock for "force-systemd-env-866940"
	I1009 18:58:42.551445   54372 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-866940 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-866940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:58:42.551577   54372 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 18:58:39.998380   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999086   54061 main.go:141] libmachine: (NoKubernetes-156430) found domain IP: 192.168.61.10
	I1009 18:58:39.999111   54061 main.go:141] libmachine: (NoKubernetes-156430) reserving static IP address...
	I1009 18:58:39.999127   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has current primary IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:39.999586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-156430", mac: "52:54:00:35:84:5d", ip: "192.168.61.10"} in network mk-NoKubernetes-156430
	I1009 18:58:40.260566   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Getting to WaitForSSH function...
	I1009 18:58:40.260617   54061 main.go:141] libmachine: (NoKubernetes-156430) reserved static IP address 192.168.61.10 for domain NoKubernetes-156430
	I1009 18:58:40.260643   54061 main.go:141] libmachine: (NoKubernetes-156430) waiting for SSH...
	I1009 18:58:40.264626   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265277   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.265312   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.265489   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH client type: external
	I1009 18:58:40.265523   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | Using SSH private key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa (-rw-------)
	I1009 18:58:40.265550   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:58:40.265563   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | About to run SSH command:
	I1009 18:58:40.265575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | exit 0
	I1009 18:58:40.407821   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | SSH cmd err, output: <nil>: 
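The empty-output `exit 0` above is the driver's WaitForSSH probe succeeding: it shells out to the exact /usr/bin/ssh argv logged a few lines earlier and treats the machine as reachable once the command round-trips. A minimal sketch of replaying the same probe by hand (same options and per-profile key as in the DBG line):
	# Sketch: the manual equivalent of the driver's external SSH liveness check
	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa \
	    -p 22 docker@192.168.61.10 exit 0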
	I1009 18:58:40.408193   54061 main.go:141] libmachine: (NoKubernetes-156430) domain creation complete
	I1009 18:58:40.408590   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:40.409303   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409536   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:40.409730   54061 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:58:40.409748   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetState
	I1009 18:58:40.411565   54061 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:58:40.411580   54061 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:58:40.411585   54061 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:58:40.411591   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.414834   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.415447   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.415725   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.415952   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416137   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.416345   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.416554   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.416871   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.416892   54061 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:58:40.536033   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.536091   54061 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:58:40.536103   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.539601   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540048   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.540083   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.540284   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.540461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540600   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.540759   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.540932   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.541175   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.541195   54061 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:58:40.668014   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 18:58:40.668182   54061 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:58:40.668202   54061 main.go:141] libmachine: Provisioning with buildroot...
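
Provisioner detection works off the `cat /etc/os-release` output captured above. A sketch of the parsing step (field names follow os-release(5); libmachine matches fields like ID, here "buildroot", to choose a provisioner):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns os-release output into a key/value map.
    func parseOSRelease(out string) map[string]string {
    	kv := make(map[string]string)
    	for _, line := range strings.Split(out, "\n") {
    		if k, v, ok := strings.Cut(line, "="); ok {
    			kv[k] = strings.Trim(v, `"`)
    		}
    	}
    	return kv
    }

    func main() {
    	osr := parseOSRelease("NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot")
    	fmt.Println(osr["ID"]) // buildroot
    }
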
	I1009 18:58:40.668214   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668487   54061 buildroot.go:166] provisioning hostname "NoKubernetes-156430"
	I1009 18:58:40.668527   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.668825   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.672094   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672562   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.672591   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.672839   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.673046   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673223   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.673393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.673543   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.673796   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.673811   54061 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-156430 && echo "NoKubernetes-156430" | sudo tee /etc/hostname
	I1009 18:58:40.814131   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-156430
	
	I1009 18:58:40.814166   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.817973   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818494   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.818575   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.818776   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:40.819070   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819272   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:40.819482   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:40.819704   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:40.819912   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:40.819928   54061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-156430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-156430/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-156430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:58:40.960331   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:58:40.960360   54061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11352/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11352/.minikube}
	I1009 18:58:40.960384   54061 buildroot.go:174] setting up certificates
	I1009 18:58:40.960401   54061 provision.go:84] configureAuth start
	I1009 18:58:40.960415   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetMachineName
	I1009 18:58:40.960761   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:40.964382   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.964921   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.964954   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.965178   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:40.968310   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.968870   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:40.968919   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:40.969111   54061 provision.go:143] copyHostCerts
	I1009 18:58:40.969145   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969181   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem, removing ...
	I1009 18:58:40.969197   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem
	I1009 18:58:40.969271   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/ca.pem (1078 bytes)
	I1009 18:58:40.969374   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969393   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem, removing ...
	I1009 18:58:40.969398   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem
	I1009 18:58:40.969425   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/cert.pem (1123 bytes)
	I1009 18:58:40.969504   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969533   54061 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem, removing ...
	I1009 18:58:40.969543   54061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem
	I1009 18:58:40.969586   54061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11352/.minikube/key.pem (1675 bytes)
	I1009 18:58:40.969702   54061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-156430 san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
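
The "generating server cert" line above records the SAN set baked into the machine's server certificate. A standard-library sketch of an equivalent issuance (SAN values copied from the log line; everything else is illustrative, and it self-signs for brevity where the real flow signs with ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-156430"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log: san=[127.0.0.1 192.168.61.10 NoKubernetes-156430 localhost minikube]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.10")},
    		DNSNames:    []string{"NoKubernetes-156430", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert DER: %d bytes\n", len(der))
    }
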
	I1009 18:58:41.825514   54061 provision.go:177] copyRemoteCerts
	I1009 18:58:41.825595   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:58:41.825625   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:41.828960   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:41.829483   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:41.829699   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:41.829890   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:41.830096   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:41.830253   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:41.925362   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:58:41.925436   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:58:41.956804   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:58:41.956924   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:58:41.989131   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:58:41.989205   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 18:58:42.020058   54061 provision.go:87] duration metric: took 1.059626183s to configureAuth
	I1009 18:58:42.020089   54061 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:58:42.020303   54061 config.go:182] Loaded profile config "NoKubernetes-156430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 18:58:42.020385   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.024034   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024417   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.024450   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.024676   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.024865   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025026   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.025234   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.025433   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.025638   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.025653   54061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:58:42.274423   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:58:42.274451   54061 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:58:42.274461   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetURL
	I1009 18:58:42.275927   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | using libvirt version 8000000
	I1009 18:58:42.278858   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279256   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.279289   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.279476   54061 main.go:141] libmachine: Docker is up and running!
	I1009 18:58:42.279492   54061 main.go:141] libmachine: Reticulating splines...
	I1009 18:58:42.279499   54061 client.go:171] duration metric: took 22.713284182s to LocalClient.Create
	I1009 18:58:42.279522   54061 start.go:167] duration metric: took 22.713359926s to libmachine.API.Create "NoKubernetes-156430"
	I1009 18:58:42.279548   54061 start.go:293] postStartSetup for "NoKubernetes-156430" (driver="kvm2")
	I1009 18:58:42.279558   54061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:58:42.279578   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.279814   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:58:42.279845   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.282285   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282640   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.282674   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.282798   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.282976   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.283169   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.283296   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.373337   54061 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:58:42.378514   54061 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:58:42.378548   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/addons for local assets ...
	I1009 18:58:42.378618   54061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11352/.minikube/files for local assets ...
	I1009 18:58:42.378713   54061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> 152632.pem in /etc/ssl/certs
	I1009 18:58:42.378732   54061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem -> /etc/ssl/certs/152632.pem
	I1009 18:58:42.378881   54061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:58:42.391375   54061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/ssl/certs/152632.pem --> /etc/ssl/certs/152632.pem (1708 bytes)
	I1009 18:58:42.422367   54061 start.go:296] duration metric: took 142.804384ms for postStartSetup
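
The filesync scan in postStartSetup mirrors every file under .minikube/files into the guest at the same relative path (here etc/ssl/certs/152632.pem becomes /etc/ssl/certs/152632.pem). A simplified sketch of that scan, with assumed names:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    // localAssets walks the files root and returns each file's guest
    // destination path (the relative path rooted at "/").
    func localAssets(root string) ([]string, error) {
    	var dests []string
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, rerr := filepath.Rel(root, p)
    		if rerr != nil {
    			return rerr
    		}
    		dests = append(dests, "/"+filepath.ToSlash(rel)) // guest destination
    		return nil
    	})
    	return dests, err
    }

    func main() {
    	dests, _ := localAssets(".minikube/files")
    	fmt.Println(dests)
    }
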
	I1009 18:58:42.422479   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetConfigRaw
	I1009 18:58:42.423258   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.426192   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426499   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.426529   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.426863   54061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/NoKubernetes-156430/config.json ...
	I1009 18:58:42.427143   54061 start.go:128] duration metric: took 22.88324393s to createHost
	I1009 18:58:42.427175   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.429891   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430321   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.430350   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.430554   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.430735   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.430866   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.431027   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.431224   54061 main.go:141] libmachine: Using SSH client type: native
	I1009 18:58:42.431461   54061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.10 22 <nil> <nil>}
	I1009 18:58:42.431473   54061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:58:42.551194   54061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760036322.526817929
	
	I1009 18:58:42.551223   54061 fix.go:216] guest clock: 1760036322.526817929
	I1009 18:58:42.551235   54061 fix.go:229] Guest: 2025-10-09 18:58:42.526817929 +0000 UTC Remote: 2025-10-09 18:58:42.427160398 +0000 UTC m=+24.708548246 (delta=99.657531ms)
	I1009 18:58:42.551280   54061 fix.go:200] guest clock delta is within tolerance: 99.657531ms
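
The guest-clock check above runs `date +%s.%N` in the VM and compares it with the host clock, reporting a ~99ms delta here. A sketch of the comparison (float parsing trades nanosecond precision for brevity, which is fine at the millisecond-level tolerance being checked):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output and returns the
    // host-minus-guest skew.
    func guestClockDelta(out string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return time.Since(guest), nil
    }

    func main() {
    	d, _ := guestClockDelta("1760036322.526817929")
    	fmt.Println(d)
    }
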
	I1009 18:58:42.551289   54061 start.go:83] releasing machines lock for "NoKubernetes-156430", held for 23.007526235s
	I1009 18:58:42.551317   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.551599   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetIP
	I1009 18:58:42.555353   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.555871   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.555908   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.556160   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556731   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556904   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .DriverName
	I1009 18:58:42.556998   54061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:58:42.557069   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.557138   54061 ssh_runner.go:195] Run: cat /version.json
	I1009 18:58:42.557165   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHHostname
	I1009 18:58:42.560586   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.560975   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561008   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561033   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.561193   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.561393   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.561594   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.561636   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:84:5d", ip: ""} in network mk-NoKubernetes-156430: {Iface:virbr3 ExpiryTime:2025-10-09 19:58:35 +0000 UTC Type:0 Mac:52:54:00:35:84:5d Iaid: IPaddr:192.168.61.10 Prefix:24 Hostname:nokubernetes-156430 Clientid:01:52:54:00:35:84:5d}
	I1009 18:58:42.561916   54061 main.go:141] libmachine: (NoKubernetes-156430) DBG | domain NoKubernetes-156430 has defined IP address 192.168.61.10 and MAC address 52:54:00:35:84:5d in network mk-NoKubernetes-156430
	I1009 18:58:42.562392   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.562797   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHPort
	I1009 18:58:42.563244   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHKeyPath
	I1009 18:58:42.563412   54061 main.go:141] libmachine: (NoKubernetes-156430) Calling .GetSSHUsername
	I1009 18:58:42.563532   54061 sshutil.go:53] new ssh client: &{IP:192.168.61.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/NoKubernetes-156430/id_rsa Username:docker}
	I1009 18:58:42.687591   54061 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:42.696846   54061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:58:42.860249   54061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:58:42.867451   54061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:58:42.867517   54061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:58:42.897113   54061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:58:42.897141   54061 start.go:495] detecting cgroup driver to use...
	I1009 18:58:42.897220   54061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:58:42.919672   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:58:42.942589   54061 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:58:42.942699   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:58:42.965057   54061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:58:42.983975   54061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:58:43.208244   54061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:58:43.409851   54061 docker.go:234] disabling docker service ...
	I1009 18:58:43.409937   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:58:43.431496   54061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:58:43.449349   54061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:58:43.713575   54061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:58:43.917104   54061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:58:43.940403   54061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:58:43.966987   54061 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 18:58:43.967054   54061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 18:58:43.967114   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.985635   54061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:58:43.985708   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:43.999934   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:58:44.014371   54061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
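
The three sed runs above rewrite the 02-crio.conf drop-in: pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" after it. A regexp sketch of the same edits applied in-process (illustrative only; the real code shells out to sed as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // patchCrioConf applies the regexp equivalents of the sed edits.
    func patchCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	fmt.Println(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\""))
    }
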
	I1009 18:58:44.031915   54061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:58:44.047615   54061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:58:44.060030   54061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:58:44.060125   54061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:58:44.088348   54061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
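
The failed sysctl above is expected: the net.bridge.* keys only exist once the br_netfilter module is loaded, so the probe's failure triggers modprobe before IP forwarding is enabled. A local sketch of that fallback (the real flow runs these commands over SSH via ssh_runner, not on the host):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter probes the sysctl, loads br_netfilter on
    // failure, then enables IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
    			return fmt.Errorf("modprobe br_netfilter failed")
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }
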
	I1009 18:58:44.105749   54061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:58:44.276388   54061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:58:44.400747   54061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:58:44.400833   54061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:58:44.408292   54061 start.go:563] Will wait 60s for crictl version
	I1009 18:58:44.408361   54061 ssh_runner.go:195] Run: which crictl
	I1009 18:58:44.413380   54061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:58:44.465676   54061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
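
After restarting crio, the log shows two bounded waits: one for the socket path to appear and one for crictl to answer. A sketch of the socket wait (assumed names; the real code stats the path via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists, mirroring the
    // "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
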
	I1009 18:58:44.465768   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.505682   54061 ssh_runner.go:195] Run: crio --version
	I1009 18:58:44.550424   54061 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1009 18:58:44.551824   54061 ssh_runner.go:195] Run: rm -f paused
	I1009 18:58:44.558855   54061 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 18:58:44.562268   54061 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:58:42.553872   54372 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 18:58:42.554150   54372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:42.554213   54372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:42.573562   54372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1009 18:58:42.574189   54372 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:42.574878   54372 main.go:141] libmachine: Using API Version  1
	I1009 18:58:42.574909   54372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:42.575408   54372 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:42.575629   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetMachineName
	I1009 18:58:42.575811   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .DriverName
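
The "Plugin server listening at address 127.0.0.1:36409" line and the subsequent "Calling .GetMachineName"-style lines reflect libmachine's driver-plugin model: the kvm2 driver runs as a separate process and every method call is an RPC round-trip over loopback. A conceptual net/rpc sketch; the service/method naming here is assumed for illustration, not copied from the real plugin protocol:

    package main

    import (
    	"fmt"
    	"net/rpc"
    )

    // machineName performs one round-trip to the driver plugin.
    func machineName(addr string) (string, error) {
    	client, err := rpc.Dial("tcp", addr)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	var name string
    	err = client.Call("RPCServerDriver.GetMachineName", struct{}{}, &name)
    	return name, err
    }

    func main() {
    	name, err := machineName("127.0.0.1:36409")
    	fmt.Println(name, err)
    }
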
	I1009 18:58:42.575965   54372 start.go:159] libmachine.API.Create for "force-systemd-env-866940" (driver="kvm2")
	I1009 18:58:42.575996   54372 client.go:168] LocalClient.Create starting
	I1009 18:58:42.576048   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/ca.pem
	I1009 18:58:42.576104   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576129   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576200   54372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11352/.minikube/certs/cert.pem
	I1009 18:58:42.576230   54372 main.go:141] libmachine: Decoding PEM data...
	I1009 18:58:42.576251   54372 main.go:141] libmachine: Parsing certificate...
	I1009 18:58:42.576284   54372 main.go:141] libmachine: Running pre-create checks...
	I1009 18:58:42.576307   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .PreCreateCheck
	I1009 18:58:42.576640   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .GetConfigRaw
	I1009 18:58:42.577094   54372 main.go:141] libmachine: Creating machine...
	I1009 18:58:42.577109   54372 main.go:141] libmachine: (force-systemd-env-866940) Calling .Create
	I1009 18:58:42.577271   54372 main.go:141] libmachine: (force-systemd-env-866940) creating domain...
	I1009 18:58:42.577292   54372 main.go:141] libmachine: (force-systemd-env-866940) creating network...
	I1009 18:58:42.578684   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | found existing default network
	I1009 18:58:42.578863   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network connections='3'>
	I1009 18:58:42.578882   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>default</name>
	I1009 18:58:42.578894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 18:58:42.578906   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <forward mode='nat'>
	I1009 18:58:42.578936   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <nat>
	I1009 18:58:42.578959   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <port start='1024' end='65535'/>
	I1009 18:58:42.578972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </nat>
	I1009 18:58:42.578983   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </forward>
	I1009 18:58:42.578993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 18:58:42.579013   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 18:58:42.579030   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 18:58:42.579055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.579074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 18:58:42.579083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.579091   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.579099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.579106   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.579960   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.579788   54509 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:eb:8e} reservation:<nil>}
	I1009 18:58:42.580630   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.580543   54509 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:2a:69} reservation:<nil>}
	I1009 18:58:42.581452   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.581375   54509 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:85:cd:a4} reservation:<nil>}
	I1009 18:58:42.582428   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.582299   54509 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003429c0}
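
The network.go lines above walk candidate private /24 subnets, skipping 192.168.39.0/24, 192.168.50.0/24, and 192.168.61.0/24 because host bridges already occupy them, and settle on 192.168.72.0/24. A simplified sketch of that scan (assumed names; the real code also tracks reservations):

    package main

    import (
    	"fmt"
    	"net"
    )

    // freeSubnet returns the first candidate /24 that no host
    // interface address falls inside.
    func freeSubnet(candidates []string) (*net.IPNet, error) {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return nil, err
    	}
    	for _, c := range candidates {
    		_, subnet, err := net.ParseCIDR(c)
    		if err != nil {
    			return nil, err
    		}
    		taken := false
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
    				taken = true
    				break
    			}
    		}
    		if !taken {
    			return subnet, nil
    		}
    	}
    	return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
    	s, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"})
    	fmt.Println(s, err)
    }
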
	I1009 18:58:42.582456   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | defining private network:
	I1009 18:58:42.582477   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.582489   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.582499   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.582514   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.582525   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.582535   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.582546   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.582560   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.582572   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.582579   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.582591   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
	I1009 18:58:42.588855   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | creating private network mk-force-systemd-env-866940 192.168.72.0/24...
	I1009 18:58:42.674549   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | private network mk-force-systemd-env-866940 192.168.72.0/24 created
	I1009 18:58:42.674894   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <network>
	I1009 18:58:42.674926   54372 main.go:141] libmachine: (force-systemd-env-866940) setting up store path in /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:42.674935   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>mk-force-systemd-env-866940</name>
	I1009 18:58:42.674947   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>e017ca39-b131-46c7-8a35-2b8acbb67618</uuid>
	I1009 18:58:42.674955   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1009 18:58:42.674964   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <mac address='52:54:00:e1:bc:8c'/>
	I1009 18:58:42.674976   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <dns enable='no'/>
	I1009 18:58:42.674986   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1009 18:58:42.675005   54372 main.go:141] libmachine: (force-systemd-env-866940) building disk image from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:58:42.675015   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <dhcp>
	I1009 18:58:42.675024   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1009 18:58:42.675033   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </dhcp>
	I1009 18:58:42.675055   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </ip>
	I1009 18:58:42.675096   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </network>
	I1009 18:58:42.675125   54372 main.go:141] libmachine: (force-systemd-env-866940) Downloading /home/jenkins/minikube-integration/21139-11352/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 18:58:42.675139   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
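
The private-network XML defined and created above is rendered from the chosen subnet. A text/template sketch that reproduces it (field names are illustrative; the kvm2 driver hands the rendered XML to libvirt's NetworkDefineXML, which is where the uuid, bridge, and mac elements in the echoed document come from):

    package main

    import (
    	"os"
    	"text/template"
    )

    const networkTmpl = `<network>
      <name>mk-{{.Profile}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	t := template.Must(template.New("net").Parse(networkTmpl))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Profile":   "force-systemd-env-866940",
    		"Gateway":   "192.168.72.1",
    		"ClientMin": "192.168.72.2",
    		"ClientMax": "192.168.72.253",
    	})
    }
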
	I1009 18:58:42.675179   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.674877   54509 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:42.935427   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:42.935240   54509 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/id_rsa...
	I1009 18:58:43.757919   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757713   54509 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk...
	I1009 18:58:43.757972   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing magic tar header
	I1009 18:58:43.757993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | Writing SSH key tar header
	I1009 18:58:43.758008   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | I1009 18:58:43.757830   54509 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 ...
	I1009 18:58:43.758027   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940 (perms=drwx------)
	I1009 18:58:43.758063   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:58:43.758078   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352/.minikube (perms=drwxr-xr-x)
	I1009 18:58:43.758093   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940
	I1009 18:58:43.758110   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube/machines
	I1009 18:58:43.758123   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:58:43.758144   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21139-11352
	I1009 18:58:43.758157   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:58:43.758172   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration/21139-11352 (perms=drwxrwxr-x)
	I1009 18:58:43.758183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home/jenkins
	I1009 18:58:43.758195   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | checking permissions on dir: /home
	I1009 18:58:43.758208   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:58:43.758221   54372 main.go:141] libmachine: (force-systemd-env-866940) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:58:43.758239   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain...
	I1009 18:58:43.758248   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | skipping /home - not owner
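
The "Writing magic tar header" / "Writing SSH key tar header" steps above use the boot2docker disk trick: the raw disk image begins with a small tar archive carrying the SSH key, which the guest unpacks on first boot, and the file is then extended to the full disk size. A simplified, assumed sketch of that layout (not the driver's exact format):

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeMagicDisk writes a tar archive with the SSH key at the
    // start of the raw disk, then sparse-extends it to full size.
    func writeMagicDisk(path string, pubKey []byte, sizeBytes int64) error {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	if err := tw.Close(); err != nil {
    		return err
    	}
    	return f.Truncate(sizeBytes) // 20000MB in this run
    }

    func main() {
    	_ = writeMagicDisk("force-systemd-env-866940.rawdisk", []byte("ssh-rsa AAAA... jenkins"), 20000*1024*1024)
    }
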
	I1009 18:58:43.759588   54372 main.go:141] libmachine: (force-systemd-env-866940) defining domain using XML: 
	I1009 18:58:43.759617   54372 main.go:141] libmachine: (force-systemd-env-866940) <domain type='kvm'>
	I1009 18:58:43.759630   54372 main.go:141] libmachine: (force-systemd-env-866940)   <name>force-systemd-env-866940</name>
	I1009 18:58:43.759642   54372 main.go:141] libmachine: (force-systemd-env-866940)   <memory unit='MiB'>3072</memory>
	I1009 18:58:43.759656   54372 main.go:141] libmachine: (force-systemd-env-866940)   <vcpu>2</vcpu>
	I1009 18:58:43.759667   54372 main.go:141] libmachine: (force-systemd-env-866940)   <features>
	I1009 18:58:43.759680   54372 main.go:141] libmachine: (force-systemd-env-866940)     <acpi/>
	I1009 18:58:43.759686   54372 main.go:141] libmachine: (force-systemd-env-866940)     <apic/>
	I1009 18:58:43.759695   54372 main.go:141] libmachine: (force-systemd-env-866940)     <pae/>
	I1009 18:58:43.759700   54372 main.go:141] libmachine: (force-systemd-env-866940)   </features>
	I1009 18:58:43.759710   54372 main.go:141] libmachine: (force-systemd-env-866940)   <cpu mode='host-passthrough'>
	I1009 18:58:43.759720   54372 main.go:141] libmachine: (force-systemd-env-866940)   </cpu>
	I1009 18:58:43.759728   54372 main.go:141] libmachine: (force-systemd-env-866940)   <os>
	I1009 18:58:43.759738   54372 main.go:141] libmachine: (force-systemd-env-866940)     <type>hvm</type>
	I1009 18:58:43.759778   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='cdrom'/>
	I1009 18:58:43.759807   54372 main.go:141] libmachine: (force-systemd-env-866940)     <boot dev='hd'/>
	I1009 18:58:43.759817   54372 main.go:141] libmachine: (force-systemd-env-866940)     <bootmenu enable='no'/>
	I1009 18:58:43.759827   54372 main.go:141] libmachine: (force-systemd-env-866940)   </os>
	I1009 18:58:43.759841   54372 main.go:141] libmachine: (force-systemd-env-866940)   <devices>
	I1009 18:58:43.759855   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='cdrom'>
	I1009 18:58:43.759874   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.759892   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.759904   54372 main.go:141] libmachine: (force-systemd-env-866940)       <readonly/>
	I1009 18:58:43.759917   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.759931   54372 main.go:141] libmachine: (force-systemd-env-866940)     <disk type='file' device='disk'>
	I1009 18:58:43.759949   54372 main.go:141] libmachine: (force-systemd-env-866940)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:58:43.759967   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.759981   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.759994   54372 main.go:141] libmachine: (force-systemd-env-866940)     </disk>
	I1009 18:58:43.760007   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760019   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.760049   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760077   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760096   54372 main.go:141] libmachine: (force-systemd-env-866940)     <interface type='network'>
	I1009 18:58:43.760108   54372 main.go:141] libmachine: (force-systemd-env-866940)       <source network='default'/>
	I1009 18:58:43.760115   54372 main.go:141] libmachine: (force-systemd-env-866940)       <model type='virtio'/>
	I1009 18:58:43.760124   54372 main.go:141] libmachine: (force-systemd-env-866940)     </interface>
	I1009 18:58:43.760134   54372 main.go:141] libmachine: (force-systemd-env-866940)     <serial type='pty'>
	I1009 18:58:43.760143   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target port='0'/>
	I1009 18:58:43.760157   54372 main.go:141] libmachine: (force-systemd-env-866940)     </serial>
	I1009 18:58:43.760170   54372 main.go:141] libmachine: (force-systemd-env-866940)     <console type='pty'>
	I1009 18:58:43.760181   54372 main.go:141] libmachine: (force-systemd-env-866940)       <target type='serial' port='0'/>
	I1009 18:58:43.760193   54372 main.go:141] libmachine: (force-systemd-env-866940)     </console>
	I1009 18:58:43.760203   54372 main.go:141] libmachine: (force-systemd-env-866940)     <rng model='virtio'>
	I1009 18:58:43.760213   54372 main.go:141] libmachine: (force-systemd-env-866940)       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.760223   54372 main.go:141] libmachine: (force-systemd-env-866940)     </rng>
	I1009 18:58:43.760236   54372 main.go:141] libmachine: (force-systemd-env-866940)   </devices>
	I1009 18:58:43.760249   54372 main.go:141] libmachine: (force-systemd-env-866940) </domain>
	I1009 18:58:43.760272   54372 main.go:141] libmachine: (force-systemd-env-866940) 
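
The domain XML just emitted is handed to libvirt, and the "starting domain..." step that follows boots it; libvirt then fills in the uuid, emulator, and PCI addresses visible in the expanded XML below. A sketch of those two calls, assuming the libvirt.org/go/libvirt bindings and an established connection:

    package provision

    import (
    	"libvirt.org/go/libvirt"
    )

    // defineAndStart registers the domain XML with libvirt and boots
    // the VM; the DHCP lease shows up in the log once it is running.
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()
    	return dom.Create()
    }
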
	I1009 18:58:43.765904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:78:a8:f7 in network default
	I1009 18:58:43.766797   54372 main.go:141] libmachine: (force-systemd-env-866940) starting domain...
	I1009 18:58:43.766823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | domain force-systemd-env-866940 has defined MAC address 52:54:00:3d:b9:89 in network mk-force-systemd-env-866940
	I1009 18:58:43.766833   54372 main.go:141] libmachine: (force-systemd-env-866940) ensuring networks are active...
	I1009 18:58:43.768013   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network default is active
	I1009 18:58:43.768563   54372 main.go:141] libmachine: (force-systemd-env-866940) Ensuring network mk-force-systemd-env-866940 is active
	I1009 18:58:43.769446   54372 main.go:141] libmachine: (force-systemd-env-866940) getting domain XML...
	I1009 18:58:43.770823   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | starting domain XML:
	I1009 18:58:43.770904   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | <domain type='kvm'>
	I1009 18:58:43.770920   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <name>force-systemd-env-866940</name>
	I1009 18:58:43.770928   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <uuid>01280892-0a35-436e-8b77-3f763c9a68f6</uuid>
	I1009 18:58:43.770945   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 18:58:43.770952   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 18:58:43.770961   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:58:43.770967   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <os>
	I1009 18:58:43.770977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:58:43.770985   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='cdrom'/>
	I1009 18:58:43.770993   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <boot dev='hd'/>
	I1009 18:58:43.771001   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <bootmenu enable='no'/>
	I1009 18:58:43.771010   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </os>
	I1009 18:58:43.771017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <features>
	I1009 18:58:43.771059   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <acpi/>
	I1009 18:58:43.771083   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <apic/>
	I1009 18:58:43.771099   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <pae/>
	I1009 18:58:43.771107   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </features>
	I1009 18:58:43.771122   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:58:43.771131   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <clock offset='utc'/>
	I1009 18:58:43.771151   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:58:43.771163   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:58:43.771189   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <on_crash>destroy</on_crash>
	I1009 18:58:43.771268   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   <devices>
	I1009 18:58:43.772871   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:58:43.772899   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='cdrom'>
	I1009 18:58:43.772910   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:58:43.772924   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/boot2docker.iso'/>
	I1009 18:58:43.772932   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:58:43.772941   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <readonly/>
	I1009 18:58:43.772950   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:58:43.772958   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.772966   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <disk type='file' device='disk'>
	I1009 18:58:43.772977   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:58:43.772991   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source file='/home/jenkins/minikube-integration/21139-11352/.minikube/machines/force-systemd-env-866940/force-systemd-env-866940.rawdisk'/>
	I1009 18:58:43.773017   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:58:43.773051   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:58:43.773065   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </disk>
	I1009 18:58:43.773074   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:58:43.773087   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:58:43.773095   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773108   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:58:43.773124   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:58:43.773138   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:58:43.773148   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </controller>
	I1009 18:58:43.773160   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773170   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:3d:b9:89'/>
	I1009 18:58:43.773183   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='mk-force-systemd-env-866940'/>
	I1009 18:58:43.773193   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773207   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:58:43.773217   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773233   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <interface type='network'>
	I1009 18:58:43.773243   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <mac address='52:54:00:78:a8:f7'/>
	I1009 18:58:43.773260   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <source network='default'/>
	I1009 18:58:43.773270   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <model type='virtio'/>
	I1009 18:58:43.773284   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:58:43.773293   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </interface>
	I1009 18:58:43.773305   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <serial type='pty'>
	I1009 18:58:43.773315   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='isa-serial' port='0'>
	I1009 18:58:43.773327   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |         <model name='isa-serial'/>
	I1009 18:58:43.773336   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       </target>
	I1009 18:58:43.773347   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </serial>
	I1009 18:58:43.773356   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <console type='pty'>
	I1009 18:58:43.773367   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <target type='serial' port='0'/>
	I1009 18:58:43.773376   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </console>
	I1009 18:58:43.773388   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:58:43.773397   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:58:43.773409   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <audio id='1' type='none'/>
	I1009 18:58:43.773419   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <memballoon model='virtio'>
	I1009 18:58:43.773433   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:58:43.773442   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </memballoon>
	I1009 18:58:43.773450   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     <rng model='virtio'>
	I1009 18:58:43.773459   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:58:43.773469   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:58:43.773476   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |     </rng>
	I1009 18:58:43.773503   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG |   </devices>
	I1009 18:58:43.773510   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | </domain>
	I1009 18:58:43.773521   54372 main.go:141] libmachine: (force-systemd-env-866940) DBG | 
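The XML above is the full libvirt domain definition the kvm2 driver renders before defining and booting the VM: 3 GiB of memory, 2 vCPUs, a SCSI cdrom carrying the boot2docker ISO, a virtio raw disk, and two virtio NICs (the private mk-force-systemd-env-866940 network plus libvirt's default network). If a machine created this way misbehaves, the live definition can be cross-checked with virsh; a minimal sketch, assuming the same qemu:///system URI these logs use:

	# dump the domain XML libvirt actually holds for the machine created above
	sudo virsh -c qemu:///system dumpxml force-systemd-env-866940
	# list the attached NICs and confirm both networks are wired up
	sudo virsh -c qemu:///system domiflist force-systemd-env-866940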
	I1009 18:58:44.696815   53754 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: (20.623250496s)
	W1009 18:58:44.696912   53754 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80 3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7 6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279 b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963 d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0 a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 72bad122f46c34970e4d2ca0580d608a13877d58fb4f32cdae8c7fa057094d63 49c8aec88b9627c69092cd8608816552b958bf78abb1bc6417728376f190a500 72009cb0f577a39b2c7661c16d63c6055a3a74cec422f7f2aa325f3948a8795d: Process exited with status 1
	stdout:
	62b7e37b801034d77aa47284b9cdc0a4dd76ff09ede32f88d783535d79307f80
	3bb879e041b8d2ab369df6bf5915da040bf4d92765f020dc254f8f8b8a26cda7
	6e6b0ec09a57191fc894845745ebddc82674cc752eee556cf7d9cbdc58a2115b
	e65fd2ec1c1b83a051f71adf84978e69235a5d4dcf395ff70536b82c6add9279
	b10f7340a8351489320ca618f287f440249a51e5eed10a67da4bd0592809a963
	d2063b656f666fd770f6fed3f4b0323c02abbc1e4650ce33551136968d092bb0
	
	stderr:
	E1009 18:58:44.690910    3543 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist" containerID="a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091"
	time="2025-10-09T18:58:44Z" level=fatal msg="stopping the container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": rpc error: code = NotFound desc = could not find container \"a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091\": container with ID starting with a617de108915dae0e14f431607f416e108cae4d6bc6c57d73f058d9965f7b091 not found: ID does not exist"
	I1009 18:58:44.697010   53754 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:58:44.749170   53754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:58:44.767682   53754 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  9 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Oct  9 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  9 18:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  9 18:57 /etc/kubernetes/scheduler.conf
	
	I1009 18:58:44.767749   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:58:44.781871   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:58:44.796528   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.796591   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:58:44.813206   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.829983   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.830071   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:58:44.847176   53754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:58:44.860411   53754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:44.860489   53754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:58:44.878975   53754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:58:44.899219   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:58:44.970605   53754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
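Having removed the kubeconfigs that no longer reference https://control-plane.minikube.internal:8443, minikube regenerates certificates and kubeconfig files from the rendered kubeadm config. The two phases it runs are reproducible verbatim inside the guest, using the versioned kubeadm binary minikube installs:

	# regenerate all certificates, then all kubeconfig files, from minikube's rendered config
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml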
	I1009 18:58:40.378956   52475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:58:40.400540   52475 ssh_runner.go:195] Run: openssl version
	I1009 18:58:40.409075   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:58:40.424861   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430830   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.430906   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:58:40.439375   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:58:40.456353   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15263.pem && ln -fs /usr/share/ca-certificates/15263.pem /etc/ssl/certs/15263.pem"
	I1009 18:58:40.470688   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476162   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:07 /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.476231   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15263.pem
	I1009 18:58:40.483753   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15263.pem /etc/ssl/certs/51391683.0"
	I1009 18:58:40.496302   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152632.pem && ln -fs /usr/share/ca-certificates/152632.pem /etc/ssl/certs/152632.pem"
	I1009 18:58:40.513453   52475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519452   52475 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:07 /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.519520   52475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152632.pem
	I1009 18:58:40.527391   52475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152632.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:58:40.541953   52475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:58:40.548470   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:58:40.557767   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:58:40.565517   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:58:40.572929   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:58:40.580621   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:58:40.588071   52475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
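Each of the openssl runs above is a freshness check: -checkend 86400 exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides whether the cached certs can be reused on restart. A standalone version of the same check:

	# exit 0 if the cert is still valid 24h from now, non-zero if it expires sooner
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "cert good for 24h" || echo "cert expires within 24h"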
	I1009 18:58:40.597465   52475 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-667994 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.153 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:58:40.597566   52475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:58:40.597631   52475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:58:40.642580   52475 cri.go:89] found id: "7b1bfb45a3eaace18d65de2587497b219bf6e3cd798d8c48e231bf1ad257e307"
	I1009 18:58:40.642607   52475 cri.go:89] found id: "20555dbc4eb6b0003b9e7120a568ec710f3b9cfc6a9dbc465b148e97555bf3d3"
	I1009 18:58:40.642612   52475 cri.go:89] found id: "5786f8dd0474b8a2ef87443eeee952136aadfd10370f92cf37e07541a02b70a5"
	I1009 18:58:40.642617   52475 cri.go:89] found id: "768cac5af370455dc385009f432c0d63f62e02688e116b2dec23e64f0894578b"
	I1009 18:58:40.642621   52475 cri.go:89] found id: "d3cfd4255a6edb3154603d5b3ff89b637d21671a133fcc83891af4f6e8a205c4"
	I1009 18:58:40.642624   52475 cri.go:89] found id: "252fc791a47bf2869efe267657a31dc52be38eae30346683b37a301f9ccb7490"
	I1009 18:58:40.642627   52475 cri.go:89] found id: "4593ed25c35b4d5c00b32b02fce74c71137e47c7a00fa840eb6effa737df9cf1"
	I1009 18:58:40.642629   52475 cri.go:89] found id: "3cc8ccc81072eaaa74daa572753c0a6a4c48f52fc71a6775c657b8c33f125b68"
	I1009 18:58:40.642632   52475 cri.go:89] found id: "c1d305c91f1ec6f697cc71695ff4555d0777627b35a9cb3a117ce4ac8070ead5"
	I1009 18:58:40.642639   52475 cri.go:89] found id: "19edec96082f50e67d6381b4cc16aa130713dd9bb9ac86be629415033f890dec"
	I1009 18:58:40.642642   52475 cri.go:89] found id: "ed26a33c61e3ffc9c91ce839a3b1b8244dd3f2f0c615041ef3194575deec434c"
	I1009 18:58:40.642644   52475 cri.go:89] found id: ""
	I1009 18:58:40.642687   52475 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-156430 -n NoKubernetes-156430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p NoKubernetes-156430 -n NoKubernetes-156430: exit status 6 (260.207916ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:58:47.264696   54749 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-156430" does not appear in /home/jenkins/minikube-integration/21139-11352/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-156430" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.71s)
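The exit status 6 here is consistent with the stderr above: the "NoKubernetes-156430" context is absent from the kubeconfig and the host is stopped, so the post-mortem helper skips kubectl entirely. For a running profile, the fix the status output itself suggests would be (a sketch):

	# repoint the kubectl context at the profile's current endpoint
	out/minikube-linux-amd64 update-context -p NoKubernetes-156430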

                                                
                                    

Test pass (280/325)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.32
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.36
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.69
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 91.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 203.28
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.54
35 TestAddons/parallel/Registry 26.96
36 TestAddons/parallel/RegistryCreds 1.15
38 TestAddons/parallel/InspektorGadget 6.57
39 TestAddons/parallel/MetricsServer 5.84
41 TestAddons/parallel/CSI 60.2
42 TestAddons/parallel/Headlamp 31.17
43 TestAddons/parallel/CloudSpanner 5.88
44 TestAddons/parallel/LocalPath 31.29
45 TestAddons/parallel/NvidiaDevicePlugin 6.67
46 TestAddons/parallel/Yakd 11.83
48 TestAddons/StoppedEnableDisable 88.33
49 TestCertOptions 65.74
50 TestCertExpiration 270.7
52 TestForceSystemdFlag 44.4
53 TestForceSystemdEnv 55.81
55 TestKVMDriverInstallOrUpdate 0.95
59 TestErrorSpam/setup 36.49
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.68
63 TestErrorSpam/unpause 1.84
64 TestErrorSpam/stop 79.93
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.01
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 64.73
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
76 TestFunctional/serial/CacheCmd/cache/add_local 2.22
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 36.9
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.5
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.32
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 20.73
92 TestFunctional/parallel/DryRun 0.26
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.83
98 TestFunctional/parallel/ServiceCmdConnect 16.45
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 46.99
102 TestFunctional/parallel/SSHCmd 0.39
103 TestFunctional/parallel/CpCmd 1.28
104 TestFunctional/parallel/MySQL 25.65
105 TestFunctional/parallel/FileSync 0.2
106 TestFunctional/parallel/CertSync 1.16
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.77
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
122 TestFunctional/parallel/ImageCommands/ImageBuild 5.1
123 TestFunctional/parallel/ImageCommands/Setup 1.79
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.17
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
132 TestFunctional/parallel/ServiceCmd/List 0.32
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
135 TestFunctional/parallel/ServiceCmd/Format 0.54
136 TestFunctional/parallel/ServiceCmd/URL 0.36
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.64
139 TestFunctional/parallel/ProfileCmd/profile_list 0.42
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
141 TestFunctional/parallel/MountCmd/any-port 18.89
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
143 TestFunctional/parallel/MountCmd/specific-port 1.78
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 199.16
162 TestMultiControlPlane/serial/DeployApp 6.99
163 TestMultiControlPlane/serial/PingHostFromPods 1.25
164 TestMultiControlPlane/serial/AddWorkerNode 47.33
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
167 TestMultiControlPlane/serial/CopyFile 13.33
168 TestMultiControlPlane/serial/StopSecondaryNode 86.7
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 35.4
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 389.2
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.48
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 229.48
176 TestMultiControlPlane/serial/RestartCluster 96.38
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 72.51
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
183 TestJSONOutput/start/Command 85.07
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.89
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 82.69
215 TestMountStart/serial/StartWithMountFirst 22.16
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 20.27
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.29
222 TestMountStart/serial/RestartStopped 19.86
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 100.03
227 TestMultiNode/serial/DeployApp2Nodes 5.92
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 46.63
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.42
233 TestMultiNode/serial/StopNode 2.45
234 TestMultiNode/serial/StartAfterStop 37.84
235 TestMultiNode/serial/RestartKeepsNodes 318.77
236 TestMultiNode/serial/DeleteNode 2.87
237 TestMultiNode/serial/StopMultiNode 172.11
238 TestMultiNode/serial/RestartMultiNode 118.99
239 TestMultiNode/serial/ValidateNameConflict 42.67
246 TestScheduledStopUnix 109.91
250 TestRunningBinaryUpgrade 175.85
252 TestKubernetesUpgrade 505.41
254 TestStoppedBinaryUpgrade/Setup 2.7
258 TestStoppedBinaryUpgrade/Upgrade 127.26
263 TestNetworkPlugins/group/false 3.29
275 TestPause/serial/Start 104.81
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
279 TestNoKubernetes/serial/StartWithK8s 55.48
280 TestNoKubernetes/serial/StartWithStopK8s 31.3
282 TestNoKubernetes/serial/Start 26.86
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
285 TestNoKubernetes/serial/ProfileList 29.45
286 TestNoKubernetes/serial/Stop 2.18
287 TestNoKubernetes/serial/StartNoArgs 30.99
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
289 TestNetworkPlugins/group/auto/Start 79.71
290 TestNetworkPlugins/group/kindnet/Start 92.27
291 TestNetworkPlugins/group/auto/KubeletFlags 0.23
292 TestNetworkPlugins/group/auto/NetCatPod 11.26
293 TestNetworkPlugins/group/auto/DNS 0.2
294 TestNetworkPlugins/group/auto/Localhost 0.15
295 TestNetworkPlugins/group/auto/HairPin 0.14
296 TestNetworkPlugins/group/calico/Start 76.94
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
299 TestNetworkPlugins/group/kindnet/NetCatPod 40.25
300 TestNetworkPlugins/group/kindnet/DNS 0.17
301 TestNetworkPlugins/group/kindnet/Localhost 0.14
302 TestNetworkPlugins/group/kindnet/HairPin 0.16
303 TestNetworkPlugins/group/calico/ControllerPod 6.01
304 TestNetworkPlugins/group/calico/KubeletFlags 0.28
305 TestNetworkPlugins/group/calico/NetCatPod 12.45
306 TestNetworkPlugins/group/custom-flannel/Start 69.79
307 TestNetworkPlugins/group/enable-default-cni/Start 72.44
308 TestNetworkPlugins/group/calico/DNS 0.16
309 TestNetworkPlugins/group/calico/Localhost 0.13
310 TestNetworkPlugins/group/calico/HairPin 0.13
311 TestNetworkPlugins/group/flannel/Start 93.61
312 TestNetworkPlugins/group/bridge/Start 86.15
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.78
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.57
317 TestNetworkPlugins/group/custom-flannel/DNS 0.18
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
324 TestStartStop/group/old-k8s-version/serial/FirstStart 96.54
326 TestStartStop/group/no-preload/serial/FirstStart 116.5
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
329 TestNetworkPlugins/group/bridge/NetCatPod 12.29
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
331 TestNetworkPlugins/group/flannel/NetCatPod 13.29
332 TestNetworkPlugins/group/bridge/DNS 0.19
333 TestNetworkPlugins/group/bridge/Localhost 0.19
334 TestNetworkPlugins/group/bridge/HairPin 0.31
335 TestNetworkPlugins/group/flannel/DNS 0.19
336 TestNetworkPlugins/group/flannel/Localhost 0.15
337 TestNetworkPlugins/group/flannel/HairPin 0.13
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.02
341 TestStartStop/group/newest-cni/serial/FirstStart 67.01
342 TestStartStop/group/old-k8s-version/serial/DeployApp 10.45
343 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.96
344 TestStartStop/group/old-k8s-version/serial/Stop 85.29
345 TestStartStop/group/no-preload/serial/DeployApp 10.31
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
348 TestStartStop/group/newest-cni/serial/Stop 10.58
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/no-preload/serial/Stop 89.85
351 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
352 TestStartStop/group/newest-cni/serial/SecondStart 35.4
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.72
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
359 TestStartStop/group/newest-cni/serial/Pause 2.73
361 TestStartStop/group/embed-certs/serial/FirstStart 53.62
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
363 TestStartStop/group/old-k8s-version/serial/SecondStart 43.7
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/no-preload/serial/SecondStart 60.54
366 TestStartStop/group/embed-certs/serial/DeployApp 10.33
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
369 TestStartStop/group/embed-certs/serial/Stop 84.64
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.77
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
374 TestStartStop/group/old-k8s-version/serial/Pause 3.39
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
378 TestStartStop/group/no-preload/serial/Pause 3.09
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
384 TestStartStop/group/embed-certs/serial/SecondStart 45.23
385 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.01
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
388 TestStartStop/group/embed-certs/serial/Pause 2.7
x
+
TestDownloadOnly/v1.28.0/json-events (23.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-581705 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-581705 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.324428621s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 17:56:52.189565   15263 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 17:56:52.189686   15263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
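The preload-exists assertion is essentially a check that the cached tarball is present under MINIKUBE_HOME; it can be reproduced directly with the path from the log:

	# the check passes as long as this cached preload tarball is present
	ls -lh /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4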

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-581705
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-581705: exit status 85 (63.165791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-581705 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-581705 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:56:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:56:28.906150   15275 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:28.906446   15275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:28.906457   15275 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:28.906461   15275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:28.906647   15275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	W1009 17:56:28.906776   15275 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-11352/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-11352/.minikube/config/config.json: no such file or directory
	I1009 17:56:28.907909   15275 out.go:368] Setting JSON to true
	I1009 17:56:28.908914   15275 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2329,"bootTime":1760030260,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:28.909006   15275 start.go:141] virtualization: kvm guest
	I1009 17:56:28.911152   15275 out.go:99] [download-only-581705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:56:28.911306   15275 notify.go:220] Checking for updates...
	W1009 17:56:28.911312   15275 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 17:56:28.912566   15275 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:56:28.914080   15275 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:28.915974   15275 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 17:56:28.917416   15275 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 17:56:28.918866   15275 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:56:28.921268   15275 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:56:28.921496   15275 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:29.430736   15275 out.go:99] Using the kvm2 driver based on user configuration
	I1009 17:56:29.430808   15275 start.go:305] selected driver: kvm2
	I1009 17:56:29.430816   15275 start.go:925] validating driver "kvm2" against <nil>
	I1009 17:56:29.431250   15275 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:56:29.431418   15275 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:56:29.447625   15275 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:56:29.447659   15275 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:56:29.461783   15275 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:56:29.461831   15275 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:29.462373   15275 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1009 17:56:29.462547   15275 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:56:29.462584   15275 cni.go:84] Creating CNI manager for ""
	I1009 17:56:29.462632   15275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 17:56:29.462638   15275 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 17:56:29.462699   15275 start.go:349] cluster config:
	{Name:download-only-581705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-581705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:56:29.462868   15275 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:56:29.464794   15275 out.go:99] Downloading VM boot image ...
	I1009 17:56:29.464825   15275 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21139-11352/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 17:56:39.761468   15275 out.go:99] Starting "download-only-581705" primary control-plane node in "download-only-581705" cluster
	I1009 17:56:39.761512   15275 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 17:56:39.854715   15275 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 17:56:39.854765   15275 cache.go:64] Caching tarball of preloaded images
	I1009 17:56:39.854980   15275 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 17:56:39.856829   15275 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 17:56:39.856861   15275 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 17:56:39.955982   15275 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1009 17:56:39.956128   15275 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-581705 host does not exist
	  To start a cluster, run: "minikube start -p download-only-581705"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
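The preload download shown in the log above is checksummed end to end: minikube asks the GCS API for the tarball's MD5 (72bc7f8573f574c02d8c9a9b3496176b) and passes it to the downloader via the ?checksum=md5:... query so the file is verified after transfer. The same verification by hand:

	# should print 72bc7f8573f574c02d8c9a9b3496176b for an intact download
	md5sum /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4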

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-581705
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (12.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-712312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-712312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.364519924s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 17:57:04.907591   15263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 17:57:04.907715   15263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-712312
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-712312: exit status 85 (685.641908ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-581705 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-581705 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ delete  │ -p download-only-581705                                                                                                                                                                             │ download-only-581705 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ -o=json --download-only -p download-only-712312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-712312 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:56:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:56:52.584660   15542 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:52.584913   15542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:52.584922   15542 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:52.584926   15542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:52.585153   15542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 17:56:52.585619   15542 out.go:368] Setting JSON to true
	I1009 17:56:52.586423   15542 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2353,"bootTime":1760030260,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:52.586516   15542 start.go:141] virtualization: kvm guest
	I1009 17:56:52.588525   15542 out.go:99] [download-only-712312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:56:52.588693   15542 notify.go:220] Checking for updates...
	I1009 17:56:52.590286   15542 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:56:52.592011   15542 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:52.594238   15542 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 17:56:52.595956   15542 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 17:56:52.597332   15542 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:56:52.600061   15542 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:56:52.600345   15542 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:52.635336   15542 out.go:99] Using the kvm2 driver based on user configuration
	I1009 17:56:52.635372   15542 start.go:305] selected driver: kvm2
	I1009 17:56:52.635378   15542 start.go:925] validating driver "kvm2" against <nil>
	I1009 17:56:52.635707   15542 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:56:52.635782   15542 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:56:52.649715   15542 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:56:52.649759   15542 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21139-11352/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 17:56:52.663664   15542 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 17:56:52.663709   15542 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:52.664272   15542 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1009 17:56:52.664418   15542 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:56:52.664440   15542 cni.go:84] Creating CNI manager for ""
	I1009 17:56:52.664480   15542 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 17:56:52.664489   15542 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 17:56:52.664544   15542 start.go:349] cluster config:
	{Name:download-only-712312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-712312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:56:52.664625   15542 iso.go:125] acquiring lock: {Name:mk7cd771afdec68e2f33c9b863985d7ad8364238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:56:52.666551   15542 out.go:99] Starting "download-only-712312" primary control-plane node in "download-only-712312" cluster
	I1009 17:56:52.666593   15542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:56:53.079767   15542 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 17:56:53.079814   15542 cache.go:64] Caching tarball of preloaded images
	I1009 17:56:53.079995   15542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:56:53.081885   15542 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1009 17:56:53.081908   15542 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 17:56:53.175365   15542 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1009 17:56:53.175413   15542 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21139-11352/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-712312 host does not exist
	  To start a cluster, run: "minikube start -p download-only-712312"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.69s)
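
Note: the preload fetch logged above can be reproduced by hand. A minimal sketch using the tarball URL and the MD5 checksum the GCS API returned (the curl/md5sum invocation is illustrative, not part of the test):

    # Fetch the same preload tarball minikube cached above.
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    # Verify against the checksum reported in the log ("Got checksum from GCS API").
    echo "d1a46823b9241c5d38b5e0866197f2a8  preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" | md5sum -c -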

TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-712312
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1009 17:57:06.142855   15263 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-168507 --alsologtostderr --binary-mirror http://127.0.0.1:35065 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-168507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-168507
--- PASS: TestBinaryMirror (0.66s)
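
Note: the test starts an in-process HTTP mirror on 127.0.0.1:35065 and verifies minikube resolves release binaries from it instead of dl.k8s.io. A rough by-hand equivalent, assuming python3's http.server as a stand-in mirror and a hypothetical /path/to/mirror directory laid out like the release server:

    # Stand-in mirror (the test uses its own in-process server; this command is an assumption).
    python3 -m http.server 35065 --directory /path/to/mirror &
    # Point minikube at the mirror, as the test does.
    out/minikube-linux-amd64 start --download-only -p binary-mirror-168507 \
      --binary-mirror http://127.0.0.1:35065 --driver=kvm2 --container-runtime=crio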

TestOffline (91.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-636274 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-636274 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.752260471s)
helpers_test.go:175: Cleaning up "offline-crio-636274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-636274
--- PASS: TestOffline (91.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-676842
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-676842: exit status 85 (55.051593ms)

-- stdout --
	* Profile "addons-676842" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-676842"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-676842
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-676842: exit status 85 (51.018568ms)

-- stdout --
	* Profile "addons-676842" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-676842"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (203.28s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-676842 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-676842 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m23.280419483s)
--- PASS: TestAddons/Setup (203.28s)
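
Note: the start command above enables fifteen addons in one shot; each --addons flag is additive. A trimmed sketch of the same pattern (the subset of addons is chosen only for illustration):

    out/minikube-linux-amd64 start -p addons-676842 --wait=true --memory=4096 \
      --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=registry --addons=metrics-server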

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-676842 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-676842 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-676842 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-676842 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84f451ab-c2a7-43a2-9d98-5ba2301830da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84f451ab-c2a7-43a2-9d98-5ba2301830da] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004952839s
addons_test.go:694: (dbg) Run:  kubectl --context addons-676842 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-676842 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-676842 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.54s)
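
Note: the assertions above reduce to checking that the gcp-auth webhook injected credential environment variables into the workload pod. The equivalent manual checks, taken directly from the commands in the log:

    kubectl --context addons-676842 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-676842 exec busybox -- printenv GOOGLE_CLOUD_PROJECT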

TestAddons/parallel/Registry (26.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.009353ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-x48c9" [806d8e0f-ce02-4232-a9eb-bec6922740c7] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004635026s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tx9d8" [f00c3245-ca52-4084-b94e-fa011c398b29] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006963175s
addons_test.go:392: (dbg) Run:  kubectl --context addons-676842 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-676842 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-676842 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.659538132s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 ip
2025/10/09 18:01:16 [DEBUG] GET http://192.168.39.66:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable registry --alsologtostderr -v=1: (1.101482316s)
--- PASS: TestAddons/parallel/Registry (26.96s)
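
Note: reachability is probed twice, in-cluster via the Service DNS name and from the host via the address the test fetched (http://192.168.39.66:5000). A condensed sketch; the /v2/_catalog probe assumes the standard Docker registry HTTP API rather than anything the test itself runs:

    # In-cluster probe, as the test runs it.
    kubectl --context addons-676842 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side probe against the node address from the log (standard registry API; an assumption).
    curl http://192.168.39.66:5000/v2/_catalog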

TestAddons/parallel/RegistryCreds (1.15s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.881237ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-676842
addons_test.go:332: (dbg) Run:  kubectl --context addons-676842 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.15s)

TestAddons/parallel/InspektorGadget (6.57s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kctxd" [14d40841-af19-4da7-b7d3-87b6d61ac2a0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006373184s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.57s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.979527ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fvlpf" [406cdfbe-5d60-4f02-a0fc-fddb931c905e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005302901s
addons_test.go:463: (dbg) Run:  kubectl --context addons-676842 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (60.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1009 18:01:03.186844   15263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 18:01:03.195426   15263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:01:03.195451   15263 kapi.go:107] duration metric: took 8.618415ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.627364ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-676842 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-676842 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4bcf7e24-e463-4f88-b31b-686556d56582] Pending
helpers_test.go:352: "task-pv-pod" [4bcf7e24-e463-4f88-b31b-686556d56582] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4bcf7e24-e463-4f88-b31b-686556d56582] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.00587814s
addons_test.go:572: (dbg) Run:  kubectl --context addons-676842 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-676842 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-676842 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-676842 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-676842 delete pod task-pv-pod: (1.290095664s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-676842 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-676842 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-676842 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [80624599-bde6-4c8e-9018-1d34f3751551] Pending
helpers_test.go:352: "task-pv-pod-restore" [80624599-bde6-4c8e-9018-1d34f3751551] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [80624599-bde6-4c8e-9018-1d34f3751551] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005040851s
addons_test.go:614: (dbg) Run:  kubectl --context addons-676842 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-676842 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-676842 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.932814233s)
--- PASS: TestAddons/parallel/CSI (60.20s)
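
Note: each repeated "get pvc ... -o jsonpath={.status.phase}" line above is one iteration of the helper's bind poll. As a shell loop it is roughly the following (the 2s interval is illustrative):

    # Poll until the claim reports phase Bound (mirrors helpers_test.go:402).
    until [ "$(kubectl --context addons-676842 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done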

TestAddons/parallel/Headlamp (31.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-676842 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-j2r26" [b5a902ab-c122-40a2-82b4-521361313ceb] Pending
helpers_test.go:352: "headlamp-6945c6f4d-j2r26" [b5a902ab-c122-40a2-82b4-521361313ceb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-j2r26" [b5a902ab-c122-40a2-82b4-521361313ceb] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 24.003910318s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable headlamp --alsologtostderr -v=1: (6.185508512s)
--- PASS: TestAddons/parallel/Headlamp (31.17s)

TestAddons/parallel/CloudSpanner (5.88s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-2s29q" [85f858fa-b624-4dce-9914-30c5ff6845e3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006071996s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.88s)

TestAddons/parallel/LocalPath (31.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-676842 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-676842 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [ea03c8b6-8b30-4009-8137-107da26169f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [ea03c8b6-8b30-4009-8137-107da26169f7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [ea03c8b6-8b30-4009-8137-107da26169f7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 13.005705723s
addons_test.go:967: (dbg) Run:  kubectl --context addons-676842 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 ssh "cat /opt/local-path-provisioner/pvc-0a963da0-6088-440e-83a8-98817e7b62a4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-676842 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-676842 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (31.29s)
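
Note: the read-back path above reflects local-path-provisioner's on-host layout, /opt/local-path-provisioner/<pvc-uid>_<namespace>_<pvc-name>/, as seen in the ssh command in the log. To inspect what was provisioned from the host:

    # List provisioned volume directories inside the VM (layout taken from the log above).
    out/minikube-linux-amd64 -p addons-676842 ssh "ls /opt/local-path-provisioner"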

TestAddons/parallel/NvidiaDevicePlugin (6.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qj474" [c511ee7e-c0bc-4960-94e2-a78daede3a40] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005380215s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

TestAddons/parallel/Yakd (11.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lcwvv" [2033850b-b227-4b93-9804-dcec078dd7be] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006893863s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-676842 addons disable yakd --alsologtostderr -v=1: (5.822222377s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

TestAddons/StoppedEnableDisable (88.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-676842
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-676842: (1m28.057084366s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-676842
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-676842
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-676842
--- PASS: TestAddons/StoppedEnableDisable (88.33s)

TestCertOptions (65.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-669023 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-669023 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.366869145s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-669023 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-669023 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-669023 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-669023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-669023
E1009 19:00:30.818238   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestCertOptions (65.74s)
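
Note: to see just the SANs and names the --apiserver-ips/--apiserver-names flags added, the openssl output can be filtered; a sketch (the grep filter is an addition, the openssl command is the test's own):

    out/minikube-linux-amd64 -p cert-options-669023 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # Expected to include 192.168.15.15 and www.google.com, per the flags above.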

TestCertExpiration (270.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-256681 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-256681 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.636380344s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-256681 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-256681 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.150895706s)
helpers_test.go:175: Cleaning up "cert-expiration-256681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-256681
--- PASS: TestCertExpiration (270.70s)
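
Note: the first start issues certificates valid for only 3m; the second start re-issues them with an 8760h (one-year) lifetime. A sketch of confirming the new expiry, reusing the cert path from the TestCertOptions check above (that this path applies here is an assumption):

    out/minikube-linux-amd64 -p cert-expiration-256681 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"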

TestForceSystemdFlag (44.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-026602 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.284865506s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-026602 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-026602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-026602
--- PASS: TestForceSystemdFlag (44.40s)
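
Note: the cat above dumps the rendered CRI-O drop-in to verify the cgroup manager; with --force-systemd one expects the systemd manager to be configured. A narrower check (the cgroup_manager key name is an assumption about CRI-O's TOML config, not something the log shows):

    out/minikube-linux-amd64 -p force-systemd-flag-026602 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # Expect: cgroup_manager = "systemd" (assumed rendering)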

TestForceSystemdEnv (55.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-866940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.835169258s)
helpers_test.go:175: Cleaning up "force-systemd-env-866940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-866940
--- PASS: TestForceSystemdEnv (55.81s)

TestKVMDriverInstallOrUpdate (0.95s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1009 18:58:29.005868   15263 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 18:58:29.005996   15263 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3540988297/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 18:58:29.036586   15263 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3540988297/001/docker-machine-driver-kvm2 version is 1.1.1
W1009 18:58:29.036637   15263 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1009 18:58:29.036757   15263 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 18:58:29.036814   15263 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3540988297/001/docker-machine-driver-kvm2
I1009 18:58:29.817393   15263 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3540988297/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 18:58:29.834383   15263 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3540988297/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.95s)

TestErrorSpam/setup (36.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-354007 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-354007 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-354007 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-354007 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.489114877s)
--- PASS: TestErrorSpam/setup (36.49s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (79.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop: (1m17.194428366s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop: (1.065965947s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-354007 --log_dir /tmp/nospam-354007 stop: (1.667826522s)
--- PASS: TestErrorSpam/stop (79.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-11352/.minikube/files/etc/test/nested/copy/15263/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-396225 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.007863404s)
--- PASS: TestFunctional/serial/StartWithProxy (80.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (64.73s)

=== RUN   TestFunctional/serial/SoftStart
I1009 18:08:53.654370   15263 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-396225 --alsologtostderr -v=8: (1m4.731949661s)
functional_test.go:678: soft start took 1m4.732639473s for "functional-396225" cluster.
I1009 18:09:58.386736   15263 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (64.73s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-396225 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:3.1: (1.124626613s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:3.3: (1.25615373s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 cache add registry.k8s.io/pause:latest: (1.184510842s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-396225 /tmp/TestFunctionalserialCacheCmdcacheadd_local4163583717/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache add minikube-local-cache-test:functional-396225
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 cache add minikube-local-cache-test:functional-396225: (1.842372104s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache delete minikube-local-cache-test:functional-396225
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-396225
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)
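
Note: the local-image round trip condensed from the commands above: build the image with docker on the host, add it to minikube's cache (which loads it into the node), then clean up both sides:

    docker build -t minikube-local-cache-test:functional-396225 /tmp/TestFunctionalserialCacheCmdcacheadd_local4163583717/001
    out/minikube-linux-amd64 -p functional-396225 cache add minikube-local-cache-test:functional-396225
    out/minikube-linux-amd64 -p functional-396225 cache delete minikube-local-cache-test:functional-396225
    docker rmi minikube-local-cache-test:functional-396225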

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.584275ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 cache reload: (1.039286291s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
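
The reload test removes the cached image from inside the node, confirms `crictl inspecti` now fails, then restores it from the host cache. Roughly, assuming the same profile:

  $ minikube -p functional-396225 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ minikube -p functional-396225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # exits 1: no such image "registry.k8s.io/pause:latest" present
  $ minikube -p functional-396225 cache reload
  $ minikube -p functional-396225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # exits 0: the image is back on the node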

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 kubectl -- --context functional-396225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-396225 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.9s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 18:10:30.818397   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:30.824922   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:30.836384   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:30.857883   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:30.899705   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:30.981805   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:31.143270   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:31.464743   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:32.106596   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:33.388337   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:35.951190   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:10:41.072749   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-396225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.89828172s)
functional_test.go:776: restart took 36.898379609s for "functional-396225" cluster.
I1009 18:10:43.586588   15263 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.90s)
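
`--extra-config` takes component.flag=value pairs and is preserved in the profile across restarts; `--wait=all` blocks until every verified component reports ready. A sketch of the restart exercised above, under the same profile assumption:

  $ minikube start -p functional-396225 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all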

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-396225 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
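
The health check behind this test lists the control-plane pods and asserts each is Running and Ready. An equivalent query, assuming kubectl and the functional-396225 context (the jsonpath expression is illustrative, not the test's own):

  $ kubectl --context functional-396225 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'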

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 logs: (1.498100185s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 logs --file /tmp/TestFunctionalserialLogsFileCmd1425786236/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 logs --file /tmp/TestFunctionalserialLogsFileCmd1425786236/001/logs.txt: (1.480252474s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)
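
`logs` writes to stdout by default; `--file` redirects the same output to a path, which is the form the issue templates ask for. Sketch:

  $ minikube -p functional-396225 logs --file /tmp/logs.txt
  $ wc -l /tmp/logs.txt    # a non-empty file means the logs were captured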

                                                
                                    
TestFunctional/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-396225 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-396225
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-396225: exit status 115 (280.733891ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.199:32576 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-396225 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
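
The fixture defines a Service whose selector matches no running pod, so `minikube service` prints the URL table but exits 115 (SVC_UNREACHABLE). A reproduction sketch, with invalidsvc.yaml standing in for the test's fixture:

  $ kubectl --context functional-396225 apply -f invalidsvc.yaml
  $ minikube service invalid-svc -p functional-396225; echo $?    # 115
  $ kubectl --context functional-396225 delete -f invalidsvc.yaml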

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 config get cpus: exit status 14 (57.959414ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 config get cpus: exit status 14 (50.616686ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
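
`config get` on an unset key exits 14 with "specified key could not be found in config", which is what the test asserts before and after each set/unset. Sketch:

  $ minikube -p functional-396225 config unset cpus
  $ minikube -p functional-396225 config get cpus || echo "unset (exit $?)"    # exit 14
  $ minikube -p functional-396225 config set cpus 2
  $ minikube -p functional-396225 config get cpus    # prints 2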

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-396225 --alsologtostderr -v=1]
E1009 18:11:11.797262   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-396225 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 24607: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.73s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-396225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (129.051628ms)

                                                
                                                
-- stdout --
	* [functional-396225] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:11:09.669215   24499 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:11:09.669492   24499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:11:09.669503   24499 out.go:374] Setting ErrFile to fd 2...
	I1009 18:11:09.669507   24499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:11:09.669757   24499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:11:09.670254   24499 out.go:368] Setting JSON to false
	I1009 18:11:09.671224   24499 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3210,"bootTime":1760030260,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:11:09.671315   24499 start.go:141] virtualization: kvm guest
	I1009 18:11:09.673300   24499 out.go:179] * [functional-396225] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:11:09.675166   24499 notify.go:220] Checking for updates...
	I1009 18:11:09.675204   24499 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:11:09.676620   24499 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:11:09.677970   24499 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:11:09.679301   24499 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:11:09.680610   24499 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:11:09.681798   24499 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:11:09.683322   24499 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:11:09.683747   24499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:11:09.683792   24499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:11:09.697328   24499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1009 18:11:09.697781   24499 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:11:09.698424   24499 main.go:141] libmachine: Using API Version  1
	I1009 18:11:09.698450   24499 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:11:09.698847   24499 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:11:09.699020   24499 main.go:141] libmachine: (functional-396225) Calling .DriverName
	I1009 18:11:09.699285   24499 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:11:09.699571   24499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:11:09.699622   24499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:11:09.713420   24499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I1009 18:11:09.713952   24499 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:11:09.714401   24499 main.go:141] libmachine: Using API Version  1
	I1009 18:11:09.714421   24499 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:11:09.714712   24499 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:11:09.714873   24499 main.go:141] libmachine: (functional-396225) Calling .DriverName
	I1009 18:11:09.746747   24499 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 18:11:09.747941   24499 start.go:305] selected driver: kvm2
	I1009 18:11:09.747954   24499 start.go:925] validating driver "kvm2" against &{Name:functional-396225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-396225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:11:09.748085   24499 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:11:09.750081   24499 out.go:203] 
	W1009 18:11:09.751703   24499 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:11:09.752932   24499 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.26s)
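
`--dry-run` runs only the validation phase; requesting 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY check (usable minimum 1800MB) and exits 23, while the same dry run without the memory override passes. Sketch, omitting the test-only flags:

  $ minikube start -p functional-396225 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio; echo $?    # 23
  $ minikube start -p functional-396225 --dry-run \
      --driver=kvm2 --container-runtime=crio; echo $?    # 0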

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-396225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-396225 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (136.841929ms)

                                                
                                                
-- stdout --
	* [functional-396225] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:11:09.537821   24471 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:11:09.537941   24471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:11:09.537947   24471 out.go:374] Setting ErrFile to fd 2...
	I1009 18:11:09.537953   24471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:11:09.538311   24471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:11:09.538782   24471 out.go:368] Setting JSON to false
	I1009 18:11:09.539711   24471 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3210,"bootTime":1760030260,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:11:09.539803   24471 start.go:141] virtualization: kvm guest
	I1009 18:11:09.542005   24471 out.go:179] * [functional-396225] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1009 18:11:09.543162   24471 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:11:09.543157   24471 notify.go:220] Checking for updates...
	I1009 18:11:09.545411   24471 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:11:09.546660   24471 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:11:09.547799   24471 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:11:09.548871   24471 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:11:09.549991   24471 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:11:09.551543   24471 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:11:09.552010   24471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:11:09.552074   24471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:11:09.565427   24471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I1009 18:11:09.565875   24471 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:11:09.566373   24471 main.go:141] libmachine: Using API Version  1
	I1009 18:11:09.566409   24471 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:11:09.566792   24471 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:11:09.567000   24471 main.go:141] libmachine: (functional-396225) Calling .DriverName
	I1009 18:11:09.567260   24471 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:11:09.567570   24471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:11:09.567636   24471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:11:09.581309   24471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I1009 18:11:09.581810   24471 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:11:09.582378   24471 main.go:141] libmachine: Using API Version  1
	I1009 18:11:09.582406   24471 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:11:09.582719   24471 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:11:09.582882   24471 main.go:141] libmachine: (functional-396225) Calling .DriverName
	I1009 18:11:09.614233   24471 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1009 18:11:09.615325   24471 start.go:305] selected driver: kvm2
	I1009 18:11:09.615339   24471 start.go:925] validating driver "kvm2" against &{Name:functional-396225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-396225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:11:09.615437   24471 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:11:09.620609   24471 out.go:203] 
	W1009 18:11:09.622028   24471 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:11:09.623326   24471 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (16.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-396225 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-396225 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bh8kb" [f2fd76d7-d0c2-4757-8356-16011d9889b1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bh8kb" [f2fd76d7-d0c2-4757-8356-16011d9889b1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.004928281s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.199:32107
functional_test.go:1680: http://192.168.39.199:32107: success! body:
Request served by hello-node-connect-7d85dfc575-bh8kb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.199:32107
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.45s)
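
End to end, the connect test is: create a deployment from kicbase/echo-server, expose it as a NodePort on 8080, resolve the node URL, and curl it. A sketch under the same assumptions:

  $ kubectl --context functional-396225 create deployment hello-node-connect --image kicbase/echo-server
  $ kubectl --context functional-396225 expose deployment hello-node-connect --type=NodePort --port=8080
  $ URL=$(minikube -p functional-396225 service hello-node-connect --url)
  $ curl -s "$URL"    # echo-server replies with the request it served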

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2391768b-b4ad-410c-80b0-f1c9283deb1c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005930289s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-396225 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-396225 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-396225 get pvc myclaim -o=json
I1009 18:10:57.258062   15263 retry.go:31] will retry after 2.345387658s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:80d4468a-6841-4215-93af-5181956ef3ac ResourceVersion:837 Generation:0 CreationTimestamp:2025-10-09 18:10:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a6c510 VolumeMode:0xc001a6c520 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-396225 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-396225 apply -f testdata/storage-provisioner/pod.yaml
I1009 18:10:59.817462   15263 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [129fac19-3307-4643-bc4c-95d1e26b706e] Pending
helpers_test.go:352: "sp-pod" [129fac19-3307-4643-bc4c-95d1e26b706e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [129fac19-3307-4643-bc4c-95d1e26b706e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.00373905s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-396225 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-396225 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-396225 delete -f testdata/storage-provisioner/pod.yaml: (1.726894899s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-396225 apply -f testdata/storage-provisioner/pod.yaml
I1009 18:11:23.826499   15263 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [096073a8-fb9b-4d1d-9949-6967b5460cbe] Pending
helpers_test.go:352: "sp-pod" [096073a8-fb9b-4d1d-9949-6967b5460cbe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [096073a8-fb9b-4d1d-9949-6967b5460cbe] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004306688s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-396225 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.99s)
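
The persistence check is: bind a 500Mi ReadWriteOnce claim, mount it in a pod, write a file, delete and recreate the pod, and confirm the file survived. A condensed sketch, with pvc.yaml and pod.yaml standing in for the test's fixtures (sp-pod mounts myclaim at /tmp/mount, per the claim dumped in the retry message above):

  $ kubectl --context functional-396225 apply -f pvc.yaml    # myclaim: 500Mi, ReadWriteOnce
  $ kubectl --context functional-396225 apply -f pod.yaml    # sp-pod mounting myclaim at /tmp/mount
  $ kubectl --context functional-396225 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-396225 delete -f pod.yaml
  $ kubectl --context functional-396225 apply -f pod.yaml    # fresh pod, same claim
  $ kubectl --context functional-396225 exec sp-pod -- ls /tmp/mount    # foo survived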

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh -n functional-396225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cp functional-396225:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2373690436/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh -n functional-396225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh -n functional-396225 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
TestFunctional/parallel/MySQL (25.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-396225 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-74czz" [61faccf9-a983-43a7-a724-b4334354e486] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-74czz" [61faccf9-a983-43a7-a724-b4334354e486] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.007748026s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;": exit status 1 (207.23365ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 18:11:14.085744   15263 retry.go:31] will retry after 1.405128295s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;": exit status 1 (122.72603ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 18:11:15.614836   15263 retry.go:31] will retry after 1.217525203s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;": exit status 1 (115.308859ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 18:11:16.948266   15263 retry.go:31] will retry after 2.224800959s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-396225 exec mysql-5bb876957f-74czz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.65s)
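
The access-denied and socket errors above are expected while mysqld initializes; the harness simply retries with backoff until `show databases;` succeeds. A shell equivalent of that retry loop:

  $ until kubectl --context functional-396225 exec deploy/mysql -- \
        mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
      sleep 2    # mysqld still starting; retry like the harness does
    done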

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/15263/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /etc/test/nested/copy/15263/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/15263.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /etc/ssl/certs/15263.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/15263.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /usr/share/ca-certificates/15263.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/152632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /etc/ssl/certs/152632.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/152632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /usr/share/ca-certificates/152632.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)
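
A synced cert is expected in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and the OpenSSL subject-hash link (51391683.0 above). The hash name can be derived from the certificate itself; a sketch, with cert.pem as a hypothetical stand-in:

  $ minikube -p functional-396225 ssh "sudo cat /etc/ssl/certs/15263.pem"
  $ openssl x509 -in cert.pem -noout -subject_hash    # e.g. 51391683, served as 51391683.0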

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-396225 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "sudo systemctl is-active docker": exit status 1 (210.141222ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "sudo systemctl is-active containerd": exit status 1 (211.325108ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
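
With crio selected, the other runtimes must be inactive. `systemctl is-active` prints the state and exits 3 for an inactive unit; minikube ssh then reports exit status 1 while its stderr records the remote status 3 seen above. Sketch:

  $ minikube -p functional-396225 ssh "sudo systemctl is-active docker"        # inactive, remote exit 3
  $ minikube -p functional-396225 ssh "sudo systemctl is-active containerd"    # inactive, remote exit 3
  $ minikube -p functional-396225 ssh "sudo systemctl is-active crio"          # active, exit 0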

                                                
                                    
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
E1009 18:10:51.315087   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-396225 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-396225 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-mjd5l" [27f470a3-157b-4416-accb-58d33963c022] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-mjd5l" [27f470a3-157b-4416-accb-58d33963c022] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005677058s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396225 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-396225
localhost/kicbase/echo-server:functional-396225
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396225 image ls --format short --alsologtostderr:
I1009 18:11:26.873896   25441 out.go:360] Setting OutFile to fd 1 ...
I1009 18:11:26.874180   25441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:26.874190   25441 out.go:374] Setting ErrFile to fd 2...
I1009 18:11:26.874196   25441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:26.874396   25441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
I1009 18:11:26.874997   25441 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:26.875122   25441 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:26.875511   25441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:26.875586   25441 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:26.889697   25441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
I1009 18:11:26.890224   25441 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:26.890684   25441 main.go:141] libmachine: Using API Version  1
I1009 18:11:26.890707   25441 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:26.891185   25441 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:26.891463   25441 main.go:141] libmachine: (functional-396225) Calling .GetState
I1009 18:11:26.893605   25441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:26.893662   25441 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:26.907367   25441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
I1009 18:11:26.907850   25441 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:26.908308   25441 main.go:141] libmachine: Using API Version  1
I1009 18:11:26.908331   25441 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:26.908721   25441 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:26.908917   25441 main.go:141] libmachine: (functional-396225) Calling .DriverName
I1009 18:11:26.909176   25441 ssh_runner.go:195] Run: systemctl --version
I1009 18:11:26.909212   25441 main.go:141] libmachine: (functional-396225) Calling .GetSSHHostname
I1009 18:11:26.912169   25441 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:26.912664   25441 main.go:141] libmachine: (functional-396225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:0b:0d", ip: ""} in network mk-functional-396225: {Iface:virbr1 ExpiryTime:2025-10-09 19:07:48 +0000 UTC Type:0 Mac:52:54:00:c5:0b:0d Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-396225 Clientid:01:52:54:00:c5:0b:0d}
I1009 18:11:26.912702   25441 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined IP address 192.168.39.199 and MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:26.912950   25441 main.go:141] libmachine: (functional-396225) Calling .GetSSHPort
I1009 18:11:26.913153   25441 main.go:141] libmachine: (functional-396225) Calling .GetSSHKeyPath
I1009 18:11:26.913338   25441 main.go:141] libmachine: (functional-396225) Calling .GetSSHUsername
I1009 18:11:26.913480   25441 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/functional-396225/id_rsa Username:docker}
I1009 18:11:27.001285   25441 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:11:27.051814   25441 main.go:141] libmachine: Making call to close driver server
I1009 18:11:27.051832   25441 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:27.052130   25441 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:27.052147   25441 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:27.052151   25441 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
I1009 18:11:27.052161   25441 main.go:141] libmachine: Making call to close driver server
I1009 18:11:27.052170   25441 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:27.052474   25441 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:27.052504   25441 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
I1009 18:11:27.052522   25441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
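
The stderr above shows what `image ls` does under the hood: it opens an SSH session to the node and runs `sudo crictl images --output json`. A sketch of querying the same source directly, assuming the profile is still running:

    # same data the test's `image ls` is formatting, straight from the node
    out/minikube-linux-amd64 -p functional-396225 ssh -- sudo crictl images --output json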

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396225 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-396225  │ 9056ab77afb8e │ 4.95MB │
│ localhost/minikube-local-cache-test     │ functional-396225  │ 218092d980b17 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396225 image ls --format table --alsologtostderr:
I1009 18:11:30.897142   25569 out.go:360] Setting OutFile to fd 1 ...
I1009 18:11:30.897415   25569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:30.897426   25569 out.go:374] Setting ErrFile to fd 2...
I1009 18:11:30.897430   25569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:30.897628   25569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
I1009 18:11:30.898206   25569 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:30.898299   25569 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:30.898690   25569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:30.898745   25569 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:30.912678   25569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
I1009 18:11:30.913169   25569 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:30.913708   25569 main.go:141] libmachine: Using API Version  1
I1009 18:11:30.913732   25569 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:30.914109   25569 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:30.914344   25569 main.go:141] libmachine: (functional-396225) Calling .GetState
I1009 18:11:30.916491   25569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:30.916541   25569 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:30.930232   25569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
I1009 18:11:30.930677   25569 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:30.931135   25569 main.go:141] libmachine: Using API Version  1
I1009 18:11:30.931155   25569 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:30.931526   25569 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:30.931728   25569 main.go:141] libmachine: (functional-396225) Calling .DriverName
I1009 18:11:30.931911   25569 ssh_runner.go:195] Run: systemctl --version
I1009 18:11:30.931933   25569 main.go:141] libmachine: (functional-396225) Calling .GetSSHHostname
I1009 18:11:30.935031   25569 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:30.935517   25569 main.go:141] libmachine: (functional-396225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:0b:0d", ip: ""} in network mk-functional-396225: {Iface:virbr1 ExpiryTime:2025-10-09 19:07:48 +0000 UTC Type:0 Mac:52:54:00:c5:0b:0d Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-396225 Clientid:01:52:54:00:c5:0b:0d}
I1009 18:11:30.935556   25569 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined IP address 192.168.39.199 and MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:30.935730   25569 main.go:141] libmachine: (functional-396225) Calling .GetSSHPort
I1009 18:11:30.935882   25569 main.go:141] libmachine: (functional-396225) Calling .GetSSHKeyPath
I1009 18:11:30.936031   25569 main.go:141] libmachine: (functional-396225) Calling .GetSSHUsername
I1009 18:11:30.936152   25569 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/functional-396225/id_rsa Username:docker}
I1009 18:11:31.017709   25569 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:11:31.099019   25569 main.go:141] libmachine: Making call to close driver server
I1009 18:11:31.099060   25569 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:31.099373   25569 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:31.099384   25569 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
I1009 18:11:31.099391   25569 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:31.099400   25569 main.go:141] libmachine: Making call to close driver server
I1009 18:11:31.099406   25569 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:31.099657   25569 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:31.099672   25569 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396225 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-396225"],"size":"4945146"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf1400
4181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha25
6:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367
a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/s
torage-provisioner:v5"],"size":"31470524"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa99
73486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"218092d980b1762dfa4a4d4d53b7f6c82eed14d4e7bc893777411cdc33f1cf91","repoDigests":["localhost/minikube-local-cache-test@sha256:6de4d914f73c02fdede724aa31efbf893fb6934502d8c5478dc1178bfc584b5b"],"repoTags":["localhost/minikube-local-cache-test:functional-396225"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b809
4a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396225 image ls --format json --alsologtostderr:
I1009 18:11:30.659425   25545 out.go:360] Setting OutFile to fd 1 ...
I1009 18:11:30.659663   25545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:30.659668   25545 out.go:374] Setting ErrFile to fd 2...
I1009 18:11:30.659672   25545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:30.659861   25545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
I1009 18:11:30.660452   25545 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:30.660538   25545 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:30.660868   25545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:30.660928   25545 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:30.674797   25545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
I1009 18:11:30.675440   25545 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:30.676060   25545 main.go:141] libmachine: Using API Version  1
I1009 18:11:30.676088   25545 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:30.676496   25545 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:30.676732   25545 main.go:141] libmachine: (functional-396225) Calling .GetState
I1009 18:11:30.678674   25545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:30.678717   25545 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:30.692430   25545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
I1009 18:11:30.692941   25545 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:30.693375   25545 main.go:141] libmachine: Using API Version  1
I1009 18:11:30.693399   25545 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:30.693766   25545 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:30.694012   25545 main.go:141] libmachine: (functional-396225) Calling .DriverName
I1009 18:11:30.694238   25545 ssh_runner.go:195] Run: systemctl --version
I1009 18:11:30.694263   25545 main.go:141] libmachine: (functional-396225) Calling .GetSSHHostname
I1009 18:11:30.697574   25545 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:30.698096   25545 main.go:141] libmachine: (functional-396225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:0b:0d", ip: ""} in network mk-functional-396225: {Iface:virbr1 ExpiryTime:2025-10-09 19:07:48 +0000 UTC Type:0 Mac:52:54:00:c5:0b:0d Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-396225 Clientid:01:52:54:00:c5:0b:0d}
I1009 18:11:30.698128   25545 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined IP address 192.168.39.199 and MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:30.698350   25545 main.go:141] libmachine: (functional-396225) Calling .GetSSHPort
I1009 18:11:30.698555   25545 main.go:141] libmachine: (functional-396225) Calling .GetSSHKeyPath
I1009 18:11:30.698724   25545 main.go:141] libmachine: (functional-396225) Calling .GetSSHUsername
I1009 18:11:30.698905   25545 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/functional-396225/id_rsa Username:docker}
I1009 18:11:30.794173   25545 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:11:30.846269   25545 main.go:141] libmachine: Making call to close driver server
I1009 18:11:30.846286   25545 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:30.846587   25545 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:30.846606   25545 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:30.846616   25545 main.go:141] libmachine: Making call to close driver server
I1009 18:11:30.846615   25545 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
I1009 18:11:30.846624   25545 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:30.846819   25545 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:30.846832   25545 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
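
The JSON shape above (id, repoDigests, repoTags, size) is easy to post-process. A sketch using jq, which is an assumption here (the harness itself parses the output in Go, not with jq):

    # print "first-tag<TAB>size" per image; untagged entries get a placeholder
    out/minikube-linux-amd64 -p functional-396225 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<untagged>")\t\(.size)"'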

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396225 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 218092d980b1762dfa4a4d4d53b7f6c82eed14d4e7bc893777411cdc33f1cf91
repoDigests:
- localhost/minikube-local-cache-test@sha256:6de4d914f73c02fdede724aa31efbf893fb6934502d8c5478dc1178bfc584b5b
repoTags:
- localhost/minikube-local-cache-test:functional-396225
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-396225
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396225 image ls --format yaml --alsologtostderr:
I1009 18:11:27.111313   25466 out.go:360] Setting OutFile to fd 1 ...
I1009 18:11:27.111667   25466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:27.111682   25466 out.go:374] Setting ErrFile to fd 2...
I1009 18:11:27.111688   25466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:27.112008   25466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
I1009 18:11:27.112930   25466 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:27.113099   25466 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:27.113688   25466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:27.113777   25466 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:27.127665   25466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
I1009 18:11:27.128319   25466 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:27.128964   25466 main.go:141] libmachine: Using API Version  1
I1009 18:11:27.128995   25466 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:27.129362   25466 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:27.129561   25466 main.go:141] libmachine: (functional-396225) Calling .GetState
I1009 18:11:27.132096   25466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:27.132152   25466 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:27.145800   25466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
I1009 18:11:27.146291   25466 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:27.146736   25466 main.go:141] libmachine: Using API Version  1
I1009 18:11:27.146756   25466 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:27.147148   25466 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:27.147370   25466 main.go:141] libmachine: (functional-396225) Calling .DriverName
I1009 18:11:27.147615   25466 ssh_runner.go:195] Run: systemctl --version
I1009 18:11:27.147644   25466 main.go:141] libmachine: (functional-396225) Calling .GetSSHHostname
I1009 18:11:27.151218   25466 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:27.151724   25466 main.go:141] libmachine: (functional-396225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:0b:0d", ip: ""} in network mk-functional-396225: {Iface:virbr1 ExpiryTime:2025-10-09 19:07:48 +0000 UTC Type:0 Mac:52:54:00:c5:0b:0d Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-396225 Clientid:01:52:54:00:c5:0b:0d}
I1009 18:11:27.151756   25466 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined IP address 192.168.39.199 and MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:27.151947   25466 main.go:141] libmachine: (functional-396225) Calling .GetSSHPort
I1009 18:11:27.152159   25466 main.go:141] libmachine: (functional-396225) Calling .GetSSHKeyPath
I1009 18:11:27.152330   25466 main.go:141] libmachine: (functional-396225) Calling .GetSSHUsername
I1009 18:11:27.152495   25466 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/functional-396225/id_rsa Username:docker}
I1009 18:11:27.246725   25466 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:11:27.323200   25466 main.go:141] libmachine: Making call to close driver server
I1009 18:11:27.323213   25466 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:27.323466   25466 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:27.323480   25466 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:27.323488   25466 main.go:141] libmachine: Making call to close driver server
I1009 18:11:27.323494   25466 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:27.323738   25466 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:27.323776   25466 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:27.323849   25466 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh pgrep buildkitd: exit status 1 (213.942705ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image build -t localhost/my-image:functional-396225 testdata/build --alsologtostderr
2025/10/09 18:11:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 image build -t localhost/my-image:functional-396225 testdata/build --alsologtostderr: (4.654658895s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-396225 image build -t localhost/my-image:functional-396225 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e5b59a4055a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-396225
--> 212fc249f0d
Successfully tagged localhost/my-image:functional-396225
212fc249f0d6ce3f4c9cce9a1dd21d477146ab8460b1f11fd6058ddd4cb48492
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-396225 image build -t localhost/my-image:functional-396225 testdata/build --alsologtostderr:
I1009 18:11:27.590290   25520 out.go:360] Setting OutFile to fd 1 ...
I1009 18:11:27.590674   25520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:27.590691   25520 out.go:374] Setting ErrFile to fd 2...
I1009 18:11:27.590699   25520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:11:27.591033   25520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
I1009 18:11:27.591914   25520 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:27.592736   25520 config.go:182] Loaded profile config "functional-396225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:11:27.593135   25520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:27.593179   25520 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:27.606602   25520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
I1009 18:11:27.607091   25520 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:27.607600   25520 main.go:141] libmachine: Using API Version  1
I1009 18:11:27.607628   25520 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:27.608001   25520 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:27.608224   25520 main.go:141] libmachine: (functional-396225) Calling .GetState
I1009 18:11:27.610641   25520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:11:27.610682   25520 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:11:27.629271   25520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
I1009 18:11:27.629885   25520 main.go:141] libmachine: () Calling .GetVersion
I1009 18:11:27.630424   25520 main.go:141] libmachine: Using API Version  1
I1009 18:11:27.630444   25520 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:11:27.630799   25520 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:11:27.631012   25520 main.go:141] libmachine: (functional-396225) Calling .DriverName
I1009 18:11:27.631231   25520 ssh_runner.go:195] Run: systemctl --version
I1009 18:11:27.631260   25520 main.go:141] libmachine: (functional-396225) Calling .GetSSHHostname
I1009 18:11:27.635307   25520 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:27.635805   25520 main.go:141] libmachine: (functional-396225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:0b:0d", ip: ""} in network mk-functional-396225: {Iface:virbr1 ExpiryTime:2025-10-09 19:07:48 +0000 UTC Type:0 Mac:52:54:00:c5:0b:0d Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:functional-396225 Clientid:01:52:54:00:c5:0b:0d}
I1009 18:11:27.635832   25520 main.go:141] libmachine: (functional-396225) DBG | domain functional-396225 has defined IP address 192.168.39.199 and MAC address 52:54:00:c5:0b:0d in network mk-functional-396225
I1009 18:11:27.636074   25520 main.go:141] libmachine: (functional-396225) Calling .GetSSHPort
I1009 18:11:27.636264   25520 main.go:141] libmachine: (functional-396225) Calling .GetSSHKeyPath
I1009 18:11:27.636440   25520 main.go:141] libmachine: (functional-396225) Calling .GetSSHUsername
I1009 18:11:27.636599   25520 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/functional-396225/id_rsa Username:docker}
I1009 18:11:27.743098   25520 build_images.go:161] Building image from path: /tmp/build.2917533287.tar
I1009 18:11:27.743195   25520 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 18:11:27.760965   25520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2917533287.tar
I1009 18:11:27.768613   25520 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2917533287.tar: stat -c "%s %y" /var/lib/minikube/build/build.2917533287.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2917533287.tar': No such file or directory
I1009 18:11:27.768652   25520 ssh_runner.go:362] scp /tmp/build.2917533287.tar --> /var/lib/minikube/build/build.2917533287.tar (3072 bytes)
I1009 18:11:27.819059   25520 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2917533287
I1009 18:11:27.842217   25520 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2917533287 -xf /var/lib/minikube/build/build.2917533287.tar
I1009 18:11:27.859236   25520 crio.go:315] Building image: /var/lib/minikube/build/build.2917533287
I1009 18:11:27.859321   25520 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-396225 /var/lib/minikube/build/build.2917533287 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 18:11:32.162641   25520 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-396225 /var/lib/minikube/build/build.2917533287 --cgroup-manager=cgroupfs: (4.303288386s)
I1009 18:11:32.162715   25520 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2917533287
I1009 18:11:32.180361   25520 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2917533287.tar
I1009 18:11:32.192976   25520 build_images.go:217] Built localhost/my-image:functional-396225 from /tmp/build.2917533287.tar
I1009 18:11:32.193020   25520 build_images.go:133] succeeded building to: functional-396225
I1009 18:11:32.193026   25520 build_images.go:134] failed building to: 
I1009 18:11:32.193063   25520 main.go:141] libmachine: Making call to close driver server
I1009 18:11:32.193079   25520 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:32.193396   25520 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:32.193416   25520 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:32.193424   25520 main.go:141] libmachine: Making call to close driver server
I1009 18:11:32.193431   25520 main.go:141] libmachine: (functional-396225) Calling .Close
I1009 18:11:32.193397   25520 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
I1009 18:11:32.193664   25520 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:11:32.193688   25520 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:11:32.193666   25520 main.go:141] libmachine: (functional-396225) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.10s)
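
The STEP 1/3 through 3/3 lines in the stdout imply a three-instruction Dockerfile under testdata/build. A hedged reconstruction for rebuilding an equivalent image by hand; the real payload of content.txt is not shown in the log, so the placeholder line below is an assumption:

    # recreate a build context equivalent to what the STEP lines above describe
    mkdir -p /tmp/build-repro && cd /tmp/build-repro
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt   # placeholder, not the test's actual file
    out/minikube-linux-amd64 -p functional-396225 image build -t localhost/my-image:functional-396225 .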

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.766414414s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-396225
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image load --daemon kicbase/echo-server:functional-396225 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 image load --daemon kicbase/echo-server:functional-396225 --alsologtostderr: (1.112082319s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image load --daemon kicbase/echo-server:functional-396225 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-396225
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image load --daemon kicbase/echo-server:functional-396225 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image save kicbase/echo-server:functional-396225 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service list -o json
functional_test.go:1504: Took "328.584563ms" to run "out/minikube-linux-amd64 -p functional-396225 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
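
The test only checks that `service list -o json` returns promptly; the payload itself is not reproduced in the log. A quick sketch for eyeballing it, assuming jq is installed (the harness does not use jq):

    out/minikube-linux-amd64 -p functional-396225 service list -o json | jq .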

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.199:31252
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.199:31252
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
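
The three URL-flavored subtests above all reduce to: ask minikube for a service endpoint, then check it is usable. A small sketch under that reading; the GET probe at the end is an illustrative addition, since the tests themselves only validate the printed URL:

	// probe.go: fetch a service URL from minikube and probe it once.
	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-396225",
			"service", "hello-node", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.199:31252
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}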

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-396225 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.36924461s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.64s)
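
ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip: save a tagged image out of the cluster runtime, load it back, and confirm it appears in `image ls`. A minimal sketch of that round trip driven from Go the way the harness's (dbg) Run steps shell out; runCmd is a simplified stand-in for the test helpers, and the /tmp path is illustrative:

	// roundtrip.go: save an image to a tar, load it back, list images.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func runCmd(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "functional-396225"
		tar := "/tmp/echo-server-save.tar"
		if _, err := runCmd("minikube", "-p", profile, "image", "save",
			"kicbase/echo-server:"+profile, tar); err != nil {
			panic(err)
		}
		if _, err := runCmd("minikube", "-p", profile, "image", "load", tar); err != nil {
			panic(err)
		}
		out, _ := runCmd("minikube", "-p", profile, "image", "ls")
		fmt.Println(out)
	}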

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "365.447262ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.614529ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "324.131177ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.03405ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (18.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdany-port1705938708/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760033463293981178" to /tmp/TestFunctionalparallelMountCmdany-port1705938708/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760033463293981178" to /tmp/TestFunctionalparallelMountCmdany-port1705938708/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760033463293981178" to /tmp/TestFunctionalparallelMountCmdany-port1705938708/001/test-1760033463293981178
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.951344ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 18:11:03.526367   15263 retry.go:31] will retry after 427.225609ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:11 test-1760033463293981178
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh cat /mount-9p/test-1760033463293981178
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-396225 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3222faa0-5528-4e53-8e85-6fbdffc668df] Pending
helpers_test.go:352: "busybox-mount" [3222faa0-5528-4e53-8e85-6fbdffc668df] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3222faa0-5528-4e53-8e85-6fbdffc668df] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3222faa0-5528-4e53-8e85-6fbdffc668df] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.003883116s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-396225 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdany-port1705938708/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.89s)
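
The `retry.go:31] will retry after …` line above shows the harness racing the mount daemon: the first findmnt probe runs before the 9p mount is up, fails, and is retried after a short randomized delay. A minimal sketch of that backoff-and-retry pattern, assuming jittered exponential delays; this is not minikube's actual retry package:

	// retry sketch: retry a probe with jittered, growing delays.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retry(5, 200*time.Millisecond, func() error {
			return exec.Command("minikube", "-p", "functional-396225",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		})
		if err != nil {
			panic(err)
		}
	}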

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-396225
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 image save --daemon kicbase/echo-server:functional-396225 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-396225
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdspecific-port889014368/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.690369ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 18:11:22.399916   15263 retry.go:31] will retry after 555.810664ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdspecific-port889014368/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "sudo umount -f /mount-9p": exit status 1 (217.76563ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-396225 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdspecific-port889014368/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)
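
Note the tolerated failure above: by the time the deferred `sudo umount -f` runs, the mount daemon has already been stopped, so umount reports "not mounted" (ssh exit 32) and the test merely logs it. A sketch of making that cleanup explicitly idempotent; matching on the "not mounted" marker text is an assumption taken from this log:

	// cleanup sketch: treat "already unmounted" as success during teardown.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func forceUnmount(profile, path string) error {
		out, err := exec.Command("minikube", "-p", profile,
			"ssh", "sudo umount -f "+path).CombinedOutput()
		if err != nil && strings.Contains(string(out), "not mounted") {
			return nil // already gone: nothing to do
		}
		return err
	}

	func main() {
		if err := forceUnmount("functional-396225", "/mount-9p"); err != nil {
			panic(err)
		}
		fmt.Println("unmounted (or already unmounted)")
	}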

TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T" /mount1: exit status 1 (226.39183ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 18:11:24.187873   15263 retry.go:31] will retry after 638.047083ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-396225 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-396225 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-396225 /tmp/TestFunctionalparallelMountCmdVerifyCleanup943636524/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-396225
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-396225
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-396225
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (199.16s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:11:52.758573   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:13:14.683248   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m18.427436428s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.16s)

TestMultiControlPlane/serial/DeployApp (6.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 kubectl -- rollout status deployment/busybox: (4.684205453s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-4q7wv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-c9p85 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-hxz8l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-4q7wv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-c9p85 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-hxz8l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-4q7wv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-c9p85 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-hxz8l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.99s)

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-4q7wv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-4q7wv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-c9p85 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-c9p85 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-hxz8l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 kubectl -- exec busybox-7b57f96db7-hxz8l -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
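
The pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (awk 'NR==5' takes the answer line, cut takes its third space-separated field) and then pings that address from inside the pod. A compact sketch of the same check for one pod; the pod name comes from the log, and the five-line nslookup layout is an assumption about busybox's output format:

	// pinghost sketch: resolve host.minikube.internal in a pod, then ping it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-7b57f96db7-4q7wv"
		script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		ip, err := exec.Command("kubectl", "--context", "ha-363252",
			"exec", pod, "--", "sh", "-c", script).Output()
		if err != nil {
			panic(err)
		}
		host := strings.TrimSpace(string(ip))
		fmt.Println("host IP:", host)
		// One ICMP probe from the pod back to the host-side gateway.
		if err := exec.Command("kubectl", "--context", "ha-363252",
			"exec", pod, "--", "sh", "-c", "ping -c 1 "+host).Run(); err != nil {
			panic(err)
		}
	}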

TestMultiControlPlane/serial/AddWorkerNode (47.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node add --alsologtostderr -v 5
E1009 18:15:30.817709   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:50.957495   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:50.963943   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:50.975582   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:50.997076   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:51.038570   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:51.120096   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:51.281679   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:51.603284   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:15:52.245369   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 node add --alsologtostderr -v 5: (46.403379746s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
E1009 18:15:53.527097   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.33s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-363252 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (13.33s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp testdata/cp-test.txt ha-363252:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2179162168/001/cp-test_ha-363252.txt
E1009 18:15:56.088652   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252:/home/docker/cp-test.txt ha-363252-m02:/home/docker/cp-test_ha-363252_ha-363252-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test_ha-363252_ha-363252-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252:/home/docker/cp-test.txt ha-363252-m03:/home/docker/cp-test_ha-363252_ha-363252-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test_ha-363252_ha-363252-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252:/home/docker/cp-test.txt ha-363252-m04:/home/docker/cp-test_ha-363252_ha-363252-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test.txt"
E1009 18:15:58.524776   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test_ha-363252_ha-363252-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp testdata/cp-test.txt ha-363252-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2179162168/001/cp-test_ha-363252-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m02:/home/docker/cp-test.txt ha-363252:/home/docker/cp-test_ha-363252-m02_ha-363252.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test_ha-363252-m02_ha-363252.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m02:/home/docker/cp-test.txt ha-363252-m03:/home/docker/cp-test_ha-363252-m02_ha-363252-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test_ha-363252-m02_ha-363252-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m02:/home/docker/cp-test.txt ha-363252-m04:/home/docker/cp-test_ha-363252-m02_ha-363252-m04.txt
E1009 18:16:01.211184   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test_ha-363252-m02_ha-363252-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp testdata/cp-test.txt ha-363252-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2179162168/001/cp-test_ha-363252-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m03:/home/docker/cp-test.txt ha-363252:/home/docker/cp-test_ha-363252-m03_ha-363252.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test_ha-363252-m03_ha-363252.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m03:/home/docker/cp-test.txt ha-363252-m02:/home/docker/cp-test_ha-363252-m03_ha-363252-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test_ha-363252-m03_ha-363252-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m03:/home/docker/cp-test.txt ha-363252-m04:/home/docker/cp-test_ha-363252-m03_ha-363252-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test_ha-363252-m03_ha-363252-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp testdata/cp-test.txt ha-363252-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2179162168/001/cp-test_ha-363252-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m04:/home/docker/cp-test.txt ha-363252:/home/docker/cp-test_ha-363252-m04_ha-363252.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252 "sudo cat /home/docker/cp-test_ha-363252-m04_ha-363252.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m04:/home/docker/cp-test.txt ha-363252-m02:/home/docker/cp-test_ha-363252-m04_ha-363252-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m02 "sudo cat /home/docker/cp-test_ha-363252-m04_ha-363252-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 cp ha-363252-m04:/home/docker/cp-test.txt ha-363252-m03:/home/docker/cp-test_ha-363252-m04_ha-363252-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 ssh -n ha-363252-m03 "sudo cat /home/docker/cp-test_ha-363252-m04_ha-363252-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.33s)
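
CopyFile is an all-pairs matrix over the four machines: copy testdata onto each node, copy it back to the host, and copy it node-to-node, verifying each hop with `ssh sudo cat`. A sketch of just the pair generation that produces the command sequence above; node names are taken from the log:

	// cpmatrix sketch: emit the node-to-node cp commands the test iterates over.
	package main

	import "fmt"

	func main() {
		nodes := []string{"ha-363252", "ha-363252-m02", "ha-363252-m03", "ha-363252-m04"}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				fmt.Printf("minikube -p ha-363252 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					src, dst, src, dst)
			}
		}
	}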

TestMultiControlPlane/serial/StopSecondaryNode (86.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node stop m02 --alsologtostderr -v 5
E1009 18:16:11.453502   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:16:31.935634   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:17:12.897227   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 node stop m02 --alsologtostderr -v 5: (1m26.015491919s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5: exit status 7 (686.077611ms)
-- stdout --
	ha-363252
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-363252-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363252-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-363252-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1009 18:17:34.107784   30213 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:17:34.108076   30213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:17:34.108089   30213 out.go:374] Setting ErrFile to fd 2...
	I1009 18:17:34.108096   30213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:17:34.108336   30213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:17:34.108564   30213 out.go:368] Setting JSON to false
	I1009 18:17:34.108599   30213 mustload.go:65] Loading cluster: ha-363252
	I1009 18:17:34.108709   30213 notify.go:220] Checking for updates...
	I1009 18:17:34.109111   30213 config.go:182] Loaded profile config "ha-363252": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:17:34.109128   30213 status.go:174] checking status of ha-363252 ...
	I1009 18:17:34.109603   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.109644   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.132091   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45013
	I1009 18:17:34.132740   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.133422   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.133447   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.133903   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.134164   30213 main.go:141] libmachine: (ha-363252) Calling .GetState
	I1009 18:17:34.136607   30213 status.go:371] ha-363252 host status = "Running" (err=<nil>)
	I1009 18:17:34.136626   30213 host.go:66] Checking if "ha-363252" exists ...
	I1009 18:17:34.136958   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.137001   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.150624   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I1009 18:17:34.151057   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.151547   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.151579   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.151950   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.152205   30213 main.go:141] libmachine: (ha-363252) Calling .GetIP
	I1009 18:17:34.155639   30213 main.go:141] libmachine: (ha-363252) DBG | domain ha-363252 has defined MAC address 52:54:00:60:d0:51 in network mk-ha-363252
	I1009 18:17:34.156186   30213 main.go:141] libmachine: (ha-363252) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d0:51", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:11:54 +0000 UTC Type:0 Mac:52:54:00:60:d0:51 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-363252 Clientid:01:52:54:00:60:d0:51}
	I1009 18:17:34.156211   30213 main.go:141] libmachine: (ha-363252) DBG | domain ha-363252 has defined IP address 192.168.39.17 and MAC address 52:54:00:60:d0:51 in network mk-ha-363252
	I1009 18:17:34.156411   30213 host.go:66] Checking if "ha-363252" exists ...
	I1009 18:17:34.156721   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.156784   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.171057   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I1009 18:17:34.171636   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.172253   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.172282   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.172611   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.172815   30213 main.go:141] libmachine: (ha-363252) Calling .DriverName
	I1009 18:17:34.173101   30213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:17:34.173129   30213 main.go:141] libmachine: (ha-363252) Calling .GetSSHHostname
	I1009 18:17:34.176608   30213 main.go:141] libmachine: (ha-363252) DBG | domain ha-363252 has defined MAC address 52:54:00:60:d0:51 in network mk-ha-363252
	I1009 18:17:34.177202   30213 main.go:141] libmachine: (ha-363252) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:d0:51", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:11:54 +0000 UTC Type:0 Mac:52:54:00:60:d0:51 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-363252 Clientid:01:52:54:00:60:d0:51}
	I1009 18:17:34.177225   30213 main.go:141] libmachine: (ha-363252) DBG | domain ha-363252 has defined IP address 192.168.39.17 and MAC address 52:54:00:60:d0:51 in network mk-ha-363252
	I1009 18:17:34.177449   30213 main.go:141] libmachine: (ha-363252) Calling .GetSSHPort
	I1009 18:17:34.177648   30213 main.go:141] libmachine: (ha-363252) Calling .GetSSHKeyPath
	I1009 18:17:34.177817   30213 main.go:141] libmachine: (ha-363252) Calling .GetSSHUsername
	I1009 18:17:34.177974   30213 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/ha-363252/id_rsa Username:docker}
	I1009 18:17:34.270769   30213 ssh_runner.go:195] Run: systemctl --version
	I1009 18:17:34.279061   30213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:17:34.299601   30213 kubeconfig.go:125] found "ha-363252" server: "https://192.168.39.254:8443"
	I1009 18:17:34.299646   30213 api_server.go:166] Checking apiserver status ...
	I1009 18:17:34.299697   30213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:17:34.323754   30213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W1009 18:17:34.337484   30213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:17:34.337549   30213 ssh_runner.go:195] Run: ls
	I1009 18:17:34.342975   30213 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1009 18:17:34.348913   30213 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1009 18:17:34.348942   30213 status.go:463] ha-363252 apiserver status = Running (err=<nil>)
	I1009 18:17:34.348955   30213 status.go:176] ha-363252 status: &{Name:ha-363252 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:17:34.348974   30213 status.go:174] checking status of ha-363252-m02 ...
	I1009 18:17:34.349302   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.349349   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.362891   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
	I1009 18:17:34.363394   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.363892   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.363908   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.364322   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.364573   30213 main.go:141] libmachine: (ha-363252-m02) Calling .GetState
	I1009 18:17:34.366298   30213 status.go:371] ha-363252-m02 host status = "Stopped" (err=<nil>)
	I1009 18:17:34.366311   30213 status.go:384] host is not running, skipping remaining checks
	I1009 18:17:34.366317   30213 status.go:176] ha-363252-m02 status: &{Name:ha-363252-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:17:34.366330   30213 status.go:174] checking status of ha-363252-m03 ...
	I1009 18:17:34.366606   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.366638   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.380659   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
	I1009 18:17:34.381211   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.381773   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.381791   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.382254   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.382460   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetState
	I1009 18:17:34.384708   30213 status.go:371] ha-363252-m03 host status = "Running" (err=<nil>)
	I1009 18:17:34.384742   30213 host.go:66] Checking if "ha-363252-m03" exists ...
	I1009 18:17:34.385071   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.385119   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.399117   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I1009 18:17:34.399605   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.400138   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.400162   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.400520   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.400728   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetIP
	I1009 18:17:34.404761   30213 main.go:141] libmachine: (ha-363252-m03) DBG | domain ha-363252-m03 has defined MAC address 52:54:00:b4:a1:dc in network mk-ha-363252
	I1009 18:17:34.405323   30213 main.go:141] libmachine: (ha-363252-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a1:dc", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:13:52 +0000 UTC Type:0 Mac:52:54:00:b4:a1:dc Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:ha-363252-m03 Clientid:01:52:54:00:b4:a1:dc}
	I1009 18:17:34.405357   30213 main.go:141] libmachine: (ha-363252-m03) DBG | domain ha-363252-m03 has defined IP address 192.168.39.223 and MAC address 52:54:00:b4:a1:dc in network mk-ha-363252
	I1009 18:17:34.405561   30213 host.go:66] Checking if "ha-363252-m03" exists ...
	I1009 18:17:34.405990   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.406073   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.419922   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I1009 18:17:34.420499   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.421117   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.421145   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.421555   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.421828   30213 main.go:141] libmachine: (ha-363252-m03) Calling .DriverName
	I1009 18:17:34.422071   30213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:17:34.422098   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetSSHHostname
	I1009 18:17:34.425845   30213 main.go:141] libmachine: (ha-363252-m03) DBG | domain ha-363252-m03 has defined MAC address 52:54:00:b4:a1:dc in network mk-ha-363252
	I1009 18:17:34.426443   30213 main.go:141] libmachine: (ha-363252-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a1:dc", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:13:52 +0000 UTC Type:0 Mac:52:54:00:b4:a1:dc Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:ha-363252-m03 Clientid:01:52:54:00:b4:a1:dc}
	I1009 18:17:34.426474   30213 main.go:141] libmachine: (ha-363252-m03) DBG | domain ha-363252-m03 has defined IP address 192.168.39.223 and MAC address 52:54:00:b4:a1:dc in network mk-ha-363252
	I1009 18:17:34.426575   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetSSHPort
	I1009 18:17:34.426783   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetSSHKeyPath
	I1009 18:17:34.426987   30213 main.go:141] libmachine: (ha-363252-m03) Calling .GetSSHUsername
	I1009 18:17:34.427174   30213 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/ha-363252-m03/id_rsa Username:docker}
	I1009 18:17:34.509323   30213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:17:34.529365   30213 kubeconfig.go:125] found "ha-363252" server: "https://192.168.39.254:8443"
	I1009 18:17:34.529396   30213 api_server.go:166] Checking apiserver status ...
	I1009 18:17:34.529431   30213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:17:34.551635   30213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup
	W1009 18:17:34.563936   30213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:17:34.563999   30213 ssh_runner.go:195] Run: ls
	I1009 18:17:34.571910   30213 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1009 18:17:34.577287   30213 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1009 18:17:34.577319   30213 status.go:463] ha-363252-m03 apiserver status = Running (err=<nil>)
	I1009 18:17:34.577330   30213 status.go:176] ha-363252-m03 status: &{Name:ha-363252-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:17:34.577349   30213 status.go:174] checking status of ha-363252-m04 ...
	I1009 18:17:34.577812   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.577853   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.592117   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
	I1009 18:17:34.592559   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.592974   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.592993   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.593345   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.593565   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetState
	I1009 18:17:34.595390   30213 status.go:371] ha-363252-m04 host status = "Running" (err=<nil>)
	I1009 18:17:34.595405   30213 host.go:66] Checking if "ha-363252-m04" exists ...
	I1009 18:17:34.595690   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.595723   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.610123   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I1009 18:17:34.610702   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.611293   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.611321   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.611738   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.611965   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetIP
	I1009 18:17:34.615317   30213 main.go:141] libmachine: (ha-363252-m04) DBG | domain ha-363252-m04 has defined MAC address 52:54:00:23:7b:44 in network mk-ha-363252
	I1009 18:17:34.615842   30213 main.go:141] libmachine: (ha-363252-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:44", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:15:22 +0000 UTC Type:0 Mac:52:54:00:23:7b:44 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-363252-m04 Clientid:01:52:54:00:23:7b:44}
	I1009 18:17:34.615875   30213 main.go:141] libmachine: (ha-363252-m04) DBG | domain ha-363252-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:23:7b:44 in network mk-ha-363252
	I1009 18:17:34.616098   30213 host.go:66] Checking if "ha-363252-m04" exists ...
	I1009 18:17:34.616381   30213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:17:34.616417   30213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:17:34.630582   30213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I1009 18:17:34.631301   30213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:17:34.631881   30213 main.go:141] libmachine: Using API Version  1
	I1009 18:17:34.631908   30213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:17:34.632239   30213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:17:34.632440   30213 main.go:141] libmachine: (ha-363252-m04) Calling .DriverName
	I1009 18:17:34.632633   30213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:17:34.632650   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetSSHHostname
	I1009 18:17:34.635789   30213 main.go:141] libmachine: (ha-363252-m04) DBG | domain ha-363252-m04 has defined MAC address 52:54:00:23:7b:44 in network mk-ha-363252
	I1009 18:17:34.636367   30213 main.go:141] libmachine: (ha-363252-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:7b:44", ip: ""} in network mk-ha-363252: {Iface:virbr1 ExpiryTime:2025-10-09 19:15:22 +0000 UTC Type:0 Mac:52:54:00:23:7b:44 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-363252-m04 Clientid:01:52:54:00:23:7b:44}
	I1009 18:17:34.636398   30213 main.go:141] libmachine: (ha-363252-m04) DBG | domain ha-363252-m04 has defined IP address 192.168.39.156 and MAC address 52:54:00:23:7b:44 in network mk-ha-363252
	I1009 18:17:34.636642   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetSSHPort
	I1009 18:17:34.636846   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetSSHKeyPath
	I1009 18:17:34.637014   30213 main.go:141] libmachine: (ha-363252-m04) Calling .GetSSHUsername
	I1009 18:17:34.637173   30213 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/ha-363252-m04/id_rsa Username:docker}
	I1009 18:17:34.723331   30213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:17:34.743256   30213 status.go:176] ha-363252-m04 status: &{Name:ha-363252-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.70s)
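
Once m02 is stopped, `status` exits 7 instead of 0, and the harness keys off that exit code rather than parsing the per-node stdout. A sketch of reading the code via *exec.ExitError; that 7 means "degraded" here is taken from this run, not asserted from minikube's documentation:

	// statuscode sketch: capture minikube status output and its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "ha-363252", "status").Output()
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("status exit code:", ee.ExitCode()) // 7 in the run above
		}
	}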

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.4s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 node start m02 --alsologtostderr -v 5: (34.332147043s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5: (1.001477933s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.2s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 stop --alsologtostderr -v 5
E1009 18:18:34.820261   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:20:30.819845   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:20:50.957418   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:21:18.662194   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 stop --alsologtostderr -v 5: (4m13.7360371s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 start --wait true --alsologtostderr -v 5: (2m15.352469244s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.20s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 node delete m03 --alsologtostderr -v 5: (17.693925387s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)
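The readiness check above wraps a kubectl go-template in shell quoting; unwrapped, a minimal sketch that prints one Ready condition per node (output illustrative for the three nodes remaining after the delete):

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    True
    True
    True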

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (229.48s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 stop --alsologtostderr -v 5
E1009 18:25:30.817505   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:25:50.957604   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:26:53.887487   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 stop --alsologtostderr -v 5: (3m49.379158608s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5: exit status 7 (100.494242ms)
-- stdout --
	ha-363252
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363252-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-363252-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 18:28:49.593768   34506 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:49.594055   34506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:49.594065   34506 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:49.594070   34506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:49.594274   34506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:28:49.594441   34506 out.go:368] Setting JSON to false
	I1009 18:28:49.594469   34506 mustload.go:65] Loading cluster: ha-363252
	I1009 18:28:49.594558   34506 notify.go:220] Checking for updates...
	I1009 18:28:49.594873   34506 config.go:182] Loaded profile config "ha-363252": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:49.594887   34506 status.go:174] checking status of ha-363252 ...
	I1009 18:28:49.595298   34506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:28:49.595333   34506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:28:49.609553   34506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I1009 18:28:49.610005   34506 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:28:49.610540   34506 main.go:141] libmachine: Using API Version  1
	I1009 18:28:49.610560   34506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:28:49.610992   34506 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:28:49.611222   34506 main.go:141] libmachine: (ha-363252) Calling .GetState
	I1009 18:28:49.613269   34506 status.go:371] ha-363252 host status = "Stopped" (err=<nil>)
	I1009 18:28:49.613287   34506 status.go:384] host is not running, skipping remaining checks
	I1009 18:28:49.613294   34506 status.go:176] ha-363252 status: &{Name:ha-363252 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:28:49.613335   34506 status.go:174] checking status of ha-363252-m02 ...
	I1009 18:28:49.613771   34506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:28:49.613816   34506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:28:49.626901   34506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1009 18:28:49.627340   34506 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:28:49.627844   34506 main.go:141] libmachine: Using API Version  1
	I1009 18:28:49.627874   34506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:28:49.628348   34506 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:28:49.628554   34506 main.go:141] libmachine: (ha-363252-m02) Calling .GetState
	I1009 18:28:49.630858   34506 status.go:371] ha-363252-m02 host status = "Stopped" (err=<nil>)
	I1009 18:28:49.630877   34506 status.go:384] host is not running, skipping remaining checks
	I1009 18:28:49.630883   34506 status.go:176] ha-363252-m02 status: &{Name:ha-363252-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:28:49.630899   34506 status.go:174] checking status of ha-363252-m04 ...
	I1009 18:28:49.631305   34506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:28:49.631348   34506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:28:49.644679   34506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1009 18:28:49.645238   34506 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:28:49.645794   34506 main.go:141] libmachine: Using API Version  1
	I1009 18:28:49.645816   34506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:28:49.646161   34506 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:28:49.646307   34506 main.go:141] libmachine: (ha-363252-m04) Calling .GetState
	I1009 18:28:49.648253   34506 status.go:371] ha-363252-m04 host status = "Stopped" (err=<nil>)
	I1009 18:28:49.648266   34506 status.go:384] host is not running, skipping remaining checks
	I1009 18:28:49.648272   34506 status.go:176] ha-363252-m04 status: &{Name:ha-363252-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (229.48s)
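minikube status reports stopped components with a non-zero exit code (exit status 7 above), so scripts have to tolerate the exit status while still parsing stdout. A hedged sketch that counts stopped hosts in output shaped like the block above:

    $ (out/minikube-linux-amd64 -p ha-363252 status || true) | grep -c 'host: Stopped'
    3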

TestMultiControlPlane/serial/RestartCluster (96.38s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.538329689s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.38s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (72.51s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 node add --control-plane --alsologtostderr -v 5
E1009 18:30:30.818246   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:30:50.957685   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-363252 node add --control-plane --alsologtostderr -v 5: (1m11.606226569s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-363252 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestJSONOutput/start/Command (85.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-636801 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:32:14.025785   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-636801 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.064785077s)
--- PASS: TestJSONOutput/start/Command (85.07s)
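The Audit and parallel subtests below assert properties of the JSON event stream this start emitted. Assuming jq is available, a sketch of how to list the step sequence those assertions inspect (re-running start here is purely for illustration):

    $ out/minikube-linux-amd64 start -p json-output-636801 --output=json --user=testUser \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.name'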

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-636801 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-636801 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-636801 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-636801 --output=json --user=testUser: (6.893006239s)
--- PASS: TestJSONOutput/stop/Command (6.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-696059 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-696059 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.731144ms)
-- stdout --
	{"specversion":"1.0","id":"e80d0c52-24b8-405e-b011-c64f3d31a3d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-696059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64a04e70-e9dd-4b7d-86f0-5cf37c7c2a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"ea41fc12-186f-4a8c-be63-415ecdf764f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31309723-55ee-4da1-9192-0c28018204ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig"}}
	{"specversion":"1.0","id":"664190bd-ae82-4858-8e9b-84eecd47774b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube"}}
	{"specversion":"1.0","id":"d98b6670-e2e1-43e3-a0c3-f9c65cc3b080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8c5885e5-be20-4e68-89dd-81782eca4641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"acf28917-0b6b-4ceb-9b7a-54a9cd4b9153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-696059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-696059
--- PASS: TestErrorJSONOutput (0.21s)
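Each stdout line above is a CloudEvents envelope, and failures arrive as type io.k8s.sigs.minikube.error. A hedged jq sketch that extracts the error from such a stream (flags mirror the test invocation; jq assumed available):

    $ out/minikube-linux-amd64 start -p json-output-error-696059 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'
    56: The driver 'fail' is not supported on linux/amd64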

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (82.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-219252 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-219252 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.338541922s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-222145 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-222145 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.52512003s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-219252
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-222145
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-222145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-222145
helpers_test.go:175: Cleaning up "first-219252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-219252
--- PASS: TestMinikubeProfile (82.69s)
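profile list -ojson (run twice above) groups profiles under valid and invalid keys; assuming jq and the usual output shape, a one-liner for the valid profile names:

    $ out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
    first-219252
    second-222145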

TestMountStart/serial/StartWithMountFirst (22.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-712505 --memory=3072 --mount-string /tmp/TestMountStartserial3784175744/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-712505 --memory=3072 --mount-string /tmp/TestMountStartserial3784175744/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.163878265s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.16s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-712505 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-712505 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
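The verification pairs a plain ls with findmnt --json, which returns the mount as a structured record. An abbreviated sketch of what that looks like (fields trimmed and values illustrative; the KVM driver serves host mounts over 9p):

    $ out/minikube-linux-amd64 -p mount-start-1-712505 ssh -- findmnt --json /minikube-host
    {"filesystems": [{"target": "/minikube-host", "source": "192.168.39.1", "fstype": "9p"}]}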

TestMountStart/serial/StartWithMountSecond (20.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-724807 --memory=3072 --mount-string /tmp/TestMountStartserial3784175744/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-724807 --memory=3072 --mount-string /tmp/TestMountStartserial3784175744/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.265563402s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.27s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-712505 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-724807
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-724807: (1.290939513s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (19.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-724807
E1009 18:35:30.817800   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-724807: (18.862116844s)
--- PASS: TestMountStart/serial/RestartStopped (19.86s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724807 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (100.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752141 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:35:50.957660   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752141 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.580632457s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.03s)

TestMultiNode/serial/DeployApp2Nodes (5.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-752141 -- rollout status deployment/busybox: (4.451433266s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-cv545 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-q8lmj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-cv545 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-q8lmj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-cv545 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-q8lmj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.92s)
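The pod discovery above relies on kubectl jsonpath queries; the equivalent standalone forms (context name as in the test) are:

    $ kubectl --context multinode-752141 get pods -o jsonpath='{.items[*].status.podIP}'    # one IP per busybox replica
    $ kubectl --context multinode-752141 get pods -o jsonpath='{.items[*].metadata.name}'   # the generated pod names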

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-cv545 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-cv545 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-q8lmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752141 -- exec busybox-7b57f96db7-q8lmj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
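The awk/cut pipeline above extracts the host IP that CoreDNS publishes for host.minikube.internal. With busybox's nslookup, line 5 of the output typically reads "Address 1: <ip>", so field 3 after splitting on spaces is the address itself; this is a sketch tied to that output shape, not a general parser:

    $ nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    192.168.39.1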

TestMultiNode/serial/AddNode (46.63s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-752141 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-752141 -v=5 --alsologtostderr: (46.02672683s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.63s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-752141 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp testdata/cp-test.txt multinode-752141:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4270919058/001/cp-test_multinode-752141.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141:/home/docker/cp-test.txt multinode-752141-m02:/home/docker/cp-test_multinode-752141_multinode-752141-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test_multinode-752141_multinode-752141-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141:/home/docker/cp-test.txt multinode-752141-m03:/home/docker/cp-test_multinode-752141_multinode-752141-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test_multinode-752141_multinode-752141-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp testdata/cp-test.txt multinode-752141-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4270919058/001/cp-test_multinode-752141-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m02:/home/docker/cp-test.txt multinode-752141:/home/docker/cp-test_multinode-752141-m02_multinode-752141.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test_multinode-752141-m02_multinode-752141.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m02:/home/docker/cp-test.txt multinode-752141-m03:/home/docker/cp-test_multinode-752141-m02_multinode-752141-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test_multinode-752141-m02_multinode-752141-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp testdata/cp-test.txt multinode-752141-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4270919058/001/cp-test_multinode-752141-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m03:/home/docker/cp-test.txt multinode-752141:/home/docker/cp-test_multinode-752141-m03_multinode-752141.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141 "sudo cat /home/docker/cp-test_multinode-752141-m03_multinode-752141.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m03:/home/docker/cp-test.txt multinode-752141-m02:/home/docker/cp-test_multinode-752141-m03_multinode-752141-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 ssh -n multinode-752141-m02 "sudo cat /home/docker/cp-test_multinode-752141-m03_multinode-752141-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.42s)
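minikube cp addresses endpoints as [<node>:]<path>, which is what lets the matrix above cover host-to-node, node-to-host, and node-to-node copies. Two representative forms taken from the run:

    $ out/minikube-linux-amd64 -p multinode-752141 cp testdata/cp-test.txt multinode-752141:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p multinode-752141 cp multinode-752141-m02:/home/docker/cp-test.txt multinode-752141-m03:/home/docker/cp-test_multinode-752141-m02_multinode-752141-m03.txt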

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-752141 node stop m03: (1.570918926s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752141 status: exit status 7 (432.694877ms)
-- stdout --
	multinode-752141
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-752141-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-752141-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr: exit status 7 (440.846049ms)
-- stdout --
	multinode-752141
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-752141-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-752141-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 18:38:30.581176   42213 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:38:30.581407   42213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:38:30.581415   42213 out.go:374] Setting ErrFile to fd 2...
	I1009 18:38:30.581420   42213 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:38:30.581635   42213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:38:30.581796   42213 out.go:368] Setting JSON to false
	I1009 18:38:30.581829   42213 mustload.go:65] Loading cluster: multinode-752141
	I1009 18:38:30.581930   42213 notify.go:220] Checking for updates...
	I1009 18:38:30.582197   42213 config.go:182] Loaded profile config "multinode-752141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:38:30.582213   42213 status.go:174] checking status of multinode-752141 ...
	I1009 18:38:30.582705   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.582747   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.597360   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I1009 18:38:30.597858   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.598588   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.598613   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.599126   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.599310   42213 main.go:141] libmachine: (multinode-752141) Calling .GetState
	I1009 18:38:30.601316   42213 status.go:371] multinode-752141 host status = "Running" (err=<nil>)
	I1009 18:38:30.601331   42213 host.go:66] Checking if "multinode-752141" exists ...
	I1009 18:38:30.601644   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.601683   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.615676   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I1009 18:38:30.616102   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.616510   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.616534   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.616892   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.617106   42213 main.go:141] libmachine: (multinode-752141) Calling .GetIP
	I1009 18:38:30.619946   42213 main.go:141] libmachine: (multinode-752141) DBG | domain multinode-752141 has defined MAC address 52:54:00:25:c8:70 in network mk-multinode-752141
	I1009 18:38:30.620351   42213 main.go:141] libmachine: (multinode-752141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:c8:70", ip: ""} in network mk-multinode-752141: {Iface:virbr1 ExpiryTime:2025-10-09 19:36:02 +0000 UTC Type:0 Mac:52:54:00:25:c8:70 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-752141 Clientid:01:52:54:00:25:c8:70}
	I1009 18:38:30.620482   42213 main.go:141] libmachine: (multinode-752141) DBG | domain multinode-752141 has defined IP address 192.168.39.109 and MAC address 52:54:00:25:c8:70 in network mk-multinode-752141
	I1009 18:38:30.620624   42213 host.go:66] Checking if "multinode-752141" exists ...
	I1009 18:38:30.620927   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.620962   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.635168   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I1009 18:38:30.635675   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.636126   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.636145   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.636480   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.636734   42213 main.go:141] libmachine: (multinode-752141) Calling .DriverName
	I1009 18:38:30.636991   42213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:38:30.637012   42213 main.go:141] libmachine: (multinode-752141) Calling .GetSSHHostname
	I1009 18:38:30.640436   42213 main.go:141] libmachine: (multinode-752141) DBG | domain multinode-752141 has defined MAC address 52:54:00:25:c8:70 in network mk-multinode-752141
	I1009 18:38:30.640886   42213 main.go:141] libmachine: (multinode-752141) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:c8:70", ip: ""} in network mk-multinode-752141: {Iface:virbr1 ExpiryTime:2025-10-09 19:36:02 +0000 UTC Type:0 Mac:52:54:00:25:c8:70 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-752141 Clientid:01:52:54:00:25:c8:70}
	I1009 18:38:30.640915   42213 main.go:141] libmachine: (multinode-752141) DBG | domain multinode-752141 has defined IP address 192.168.39.109 and MAC address 52:54:00:25:c8:70 in network mk-multinode-752141
	I1009 18:38:30.641092   42213 main.go:141] libmachine: (multinode-752141) Calling .GetSSHPort
	I1009 18:38:30.641277   42213 main.go:141] libmachine: (multinode-752141) Calling .GetSSHKeyPath
	I1009 18:38:30.641415   42213 main.go:141] libmachine: (multinode-752141) Calling .GetSSHUsername
	I1009 18:38:30.641565   42213 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/multinode-752141/id_rsa Username:docker}
	I1009 18:38:30.724501   42213 ssh_runner.go:195] Run: systemctl --version
	I1009 18:38:30.731642   42213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:38:30.751304   42213 kubeconfig.go:125] found "multinode-752141" server: "https://192.168.39.109:8443"
	I1009 18:38:30.751352   42213 api_server.go:166] Checking apiserver status ...
	I1009 18:38:30.751393   42213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:38:30.772946   42213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	W1009 18:38:30.785706   42213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:38:30.785768   42213 ssh_runner.go:195] Run: ls
	I1009 18:38:30.791897   42213 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I1009 18:38:30.798631   42213 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I1009 18:38:30.798663   42213 status.go:463] multinode-752141 apiserver status = Running (err=<nil>)
	I1009 18:38:30.798673   42213 status.go:176] multinode-752141 status: &{Name:multinode-752141 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:38:30.798694   42213 status.go:174] checking status of multinode-752141-m02 ...
	I1009 18:38:30.799006   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.799064   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.813359   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1009 18:38:30.813925   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.814535   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.814557   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.814941   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.815146   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetState
	I1009 18:38:30.816882   42213 status.go:371] multinode-752141-m02 host status = "Running" (err=<nil>)
	I1009 18:38:30.816896   42213 host.go:66] Checking if "multinode-752141-m02" exists ...
	I1009 18:38:30.817212   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.817249   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.831900   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42913
	I1009 18:38:30.832355   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.832842   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.832869   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.833291   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.833501   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetIP
	I1009 18:38:30.837120   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | domain multinode-752141-m02 has defined MAC address 52:54:00:50:53:12 in network mk-multinode-752141
	I1009 18:38:30.837817   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:53:12", ip: ""} in network mk-multinode-752141: {Iface:virbr1 ExpiryTime:2025-10-09 19:36:55 +0000 UTC Type:0 Mac:52:54:00:50:53:12 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-752141-m02 Clientid:01:52:54:00:50:53:12}
	I1009 18:38:30.837881   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | domain multinode-752141-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:50:53:12 in network mk-multinode-752141
	I1009 18:38:30.838148   42213 host.go:66] Checking if "multinode-752141-m02" exists ...
	I1009 18:38:30.838486   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.838525   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.853011   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I1009 18:38:30.853564   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.854132   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.854146   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.854459   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.854649   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .DriverName
	I1009 18:38:30.854840   42213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:38:30.854863   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetSSHHostname
	I1009 18:38:30.858522   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | domain multinode-752141-m02 has defined MAC address 52:54:00:50:53:12 in network mk-multinode-752141
	I1009 18:38:30.859084   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:53:12", ip: ""} in network mk-multinode-752141: {Iface:virbr1 ExpiryTime:2025-10-09 19:36:55 +0000 UTC Type:0 Mac:52:54:00:50:53:12 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-752141-m02 Clientid:01:52:54:00:50:53:12}
	I1009 18:38:30.859138   42213 main.go:141] libmachine: (multinode-752141-m02) DBG | domain multinode-752141-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:50:53:12 in network mk-multinode-752141
	I1009 18:38:30.859338   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetSSHPort
	I1009 18:38:30.859586   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetSSHKeyPath
	I1009 18:38:30.859754   42213 main.go:141] libmachine: (multinode-752141-m02) Calling .GetSSHUsername
	I1009 18:38:30.859949   42213 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-11352/.minikube/machines/multinode-752141-m02/id_rsa Username:docker}
	I1009 18:38:30.940180   42213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:38:30.956581   42213 status.go:176] multinode-752141-m02 status: &{Name:multinode-752141-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:38:30.956648   42213 status.go:174] checking status of multinode-752141-m03 ...
	I1009 18:38:30.957009   42213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:38:30.957072   42213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:38:30.971540   42213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I1009 18:38:30.972025   42213 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:38:30.972473   42213 main.go:141] libmachine: Using API Version  1
	I1009 18:38:30.972498   42213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:38:30.972865   42213 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:38:30.973059   42213 main.go:141] libmachine: (multinode-752141-m03) Calling .GetState
	I1009 18:38:30.974943   42213 status.go:371] multinode-752141-m03 host status = "Stopped" (err=<nil>)
	I1009 18:38:30.974975   42213 status.go:384] host is not running, skipping remaining checks
	I1009 18:38:30.974983   42213 status.go:176] multinode-752141-m03 status: &{Name:multinode-752141-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
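
Note: the freezer-cgroup warning in the log above is expected on cgroup v2 hosts, where /proc/PID/cgroup no longer lists a named freezer controller, so the egrep exits 1; the status check treats this as non-fatal and falls through to the HTTPS /healthz probe, which returned 200 here. A minimal Go sketch of that probe (not minikube's actual code; the endpoint is taken from the log line above):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // apiserverHealthy mirrors the probe in the log: GET /healthz on the
    // apiserver endpoint and treat anything but 200 as unhealthy. The
    // cluster's self-signed cert is skipped, as a plain probe would.
    func apiserverHealthy(endpoint string) (bool, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.168.39.109:8443") // endpoint from the log above
    	fmt.Println(ok, err)
    }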

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-752141 node start m03 -v=5 --alsologtostderr: (37.154936897s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (318.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752141
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-752141
E1009 18:40:30.826562   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:40:50.956811   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-752141: (2m39.338231519s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752141 --wait=true -v=5 --alsologtostderr
E1009 18:43:33.889251   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752141 --wait=true -v=5 --alsologtostderr: (2m39.33127323s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752141
--- PASS: TestMultiNode/serial/RestartKeepsNodes (318.77s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-752141 node delete m03: (2.321439005s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.87s)
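
Note: the go-template in the last kubectl call above prints the status of each node's Ready condition, one per line. A self-contained Go sketch of the same template logic over mock data (the NodeList shape is trimmed to only the fields the template touches):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Mock of the NodeList shape kubectl feeds the template.
    	data := map[string]any{
    		"items": []map[string]any{
    			{"status": map[string]any{"conditions": []map[string]string{
    				{"type": "MemoryPressure", "status": "False"},
    				{"type": "Ready", "status": "True"},
    			}}},
    		},
    	}
    	tmpl := template.Must(template.New("ready").Parse(
    		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
    	_ = tmpl.Execute(os.Stdout, data) // prints " True" for the one Ready node
    }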

                                                
                                    
TestMultiNode/serial/StopMultiNode (172.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 stop
E1009 18:45:30.826667   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:50.957080   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-752141 stop: (2m51.947345853s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752141 status: exit status 7 (82.480627ms)

                                                
                                                
-- stdout --
	multinode-752141
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-752141-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr: exit status 7 (81.617798ms)

                                                
                                                
-- stdout --
	multinode-752141
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-752141-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:47:22.533408   45055 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:47:22.533710   45055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:22.533721   45055 out.go:374] Setting ErrFile to fd 2...
	I1009 18:47:22.533728   45055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:22.533958   45055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:47:22.534155   45055 out.go:368] Setting JSON to false
	I1009 18:47:22.534189   45055 mustload.go:65] Loading cluster: multinode-752141
	I1009 18:47:22.534296   45055 notify.go:220] Checking for updates...
	I1009 18:47:22.534580   45055 config.go:182] Loaded profile config "multinode-752141": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:47:22.534596   45055 status.go:174] checking status of multinode-752141 ...
	I1009 18:47:22.535030   45055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:47:22.535100   45055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:47:22.549275   45055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I1009 18:47:22.549780   45055 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:47:22.550322   45055 main.go:141] libmachine: Using API Version  1
	I1009 18:47:22.550342   45055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:47:22.550707   45055 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:47:22.550922   45055 main.go:141] libmachine: (multinode-752141) Calling .GetState
	I1009 18:47:22.552822   45055 status.go:371] multinode-752141 host status = "Stopped" (err=<nil>)
	I1009 18:47:22.552839   45055 status.go:384] host is not running, skipping remaining checks
	I1009 18:47:22.552846   45055 status.go:176] multinode-752141 status: &{Name:multinode-752141 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:47:22.552892   45055 status.go:174] checking status of multinode-752141-m02 ...
	I1009 18:47:22.553217   45055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:47:22.553252   45055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:47:22.566730   45055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I1009 18:47:22.567173   45055 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:47:22.567551   45055 main.go:141] libmachine: Using API Version  1
	I1009 18:47:22.567571   45055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:47:22.567899   45055 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:47:22.568106   45055 main.go:141] libmachine: (multinode-752141-m02) Calling .GetState
	I1009 18:47:22.569946   45055 status.go:371] multinode-752141-m02 host status = "Stopped" (err=<nil>)
	I1009 18:47:22.569960   45055 status.go:384] host is not running, skipping remaining checks
	I1009 18:47:22.569967   45055 status.go:176] multinode-752141-m02 status: &{Name:multinode-752141-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.11s)
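
Note: a stopped cluster is reported through the exit code (status 7 in this run), so callers can branch without parsing stdout. A hedged Go sketch of consuming that exit code; the code-to-state mapping beyond what this log shows is not asserted here:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Assumes a minikube binary on PATH and the profile name from this log.
    	cmd := exec.Command("minikube", "-p", "multinode-752141", "status")
    	err := cmd.Run()
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("cluster running")
    	case errors.As(err, &exitErr):
    		fmt.Println("status exit code:", exitErr.ExitCode()) // 7 here, with every component Stopped
    	default:
    		fmt.Println("could not run minikube:", err)
    	}
    }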

                                                
                                    
TestMultiNode/serial/RestartMultiNode (118.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752141 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:48:54.027743   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752141 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m58.414762587s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752141 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.99s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752141
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752141-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-752141-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.472822ms)

                                                
                                                
-- stdout --
	* [multinode-752141-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-752141-m02' is duplicated with machine name 'multinode-752141-m02' in profile 'multinode-752141'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752141-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752141-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.459172468s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-752141
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-752141: exit status 80 (238.705503ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-752141 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-752141-m03 already exists in multinode-752141-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-752141-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.67s)
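
Note: this test exercises two guards: a new profile may not reuse a machine name that an existing multi-node profile already owns (exit 14, MK_USAGE), and node add refuses a node that clashes with an existing profile (exit 80, GUEST_NODE_ADD). A rough Go sketch of the first guard; the "-mNN" machine-naming scheme is inferred from the node names in this log, not from minikube's source:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // validateProfileName rejects a requested profile name that collides with
    // an existing profile or with a machine name that profile would generate
    // (profile, profile-m02, profile-m03, ...).
    func validateProfileName(requested string, existing []string) error {
    	for _, p := range existing {
    		if requested == p || strings.HasPrefix(requested, p+"-m") {
    			return fmt.Errorf("profile name %q is not unique: collides with profile %q", requested, p)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(validateProfileName("multinode-752141-m02", []string{"multinode-752141"}))
    }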

                                                
                                    
TestScheduledStopUnix (109.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-755023 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-755023 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.180963086s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-755023 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-755023 -n scheduled-stop-755023
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-755023 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 18:53:27.320233   15263 retry.go:31] will retry after 90.686µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.321411   15263 retry.go:31] will retry after 205.971µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.322545   15263 retry.go:31] will retry after 278.307µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.323684   15263 retry.go:31] will retry after 193.479µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.324815   15263 retry.go:31] will retry after 628.251µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.325934   15263 retry.go:31] will retry after 921.271µs: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.327078   15263 retry.go:31] will retry after 1.03173ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.328234   15263 retry.go:31] will retry after 1.683711ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.330442   15263 retry.go:31] will retry after 2.475333ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.333729   15263 retry.go:31] will retry after 2.179692ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.336988   15263 retry.go:31] will retry after 8.122967ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.346211   15263 retry.go:31] will retry after 8.57278ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.355553   15263 retry.go:31] will retry after 17.131605ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.373819   15263 retry.go:31] will retry after 23.032154ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
I1009 18:53:27.397072   15263 retry.go:31] will retry after 36.289637ms: open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/scheduled-stop-755023/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-755023 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-755023 -n scheduled-stop-755023
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-755023
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-755023 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-755023
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-755023: exit status 7 (63.662424ms)

                                                
                                                
-- stdout --
	scheduled-stop-755023
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-755023 -n scheduled-stop-755023
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-755023 -n scheduled-stop-755023: exit status 7 (64.097521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-755023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-755023
--- PASS: TestScheduledStopUnix (109.91s)
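
Note: the retry.go lines above poll for the scheduled-stop pid file with intervals that grow from about 90µs to 36ms. A minimal Go sketch of that wait-with-backoff pattern (the file path and doubling factor are illustrative assumptions):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForFile polls until path exists, doubling the sleep between
    // attempts, roughly matching the escalating retry timings in the log.
    func waitForFile(path string, attempts int) error {
    	delay := 100 * time.Microsecond
    	for i := 0; i < attempts; i++ {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
    }

    func main() {
    	fmt.Println(waitForFile("/tmp/scheduled-stop-pid", 10)) // hypothetical path
    }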

                                                
                                    
TestRunningBinaryUpgrade (175.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.450943266 start -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:55:30.817852   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.450943266 start -p running-upgrade-852620 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.077230592s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-852620 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.216908584s)
helpers_test.go:175: Cleaning up "running-upgrade-852620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-852620
--- PASS: TestRunningBinaryUpgrade (175.85s)

                                                
                                    
TestKubernetesUpgrade (505.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.933259266s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-667994
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-667994: (2.160801408s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-667994 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-667994 status --format={{.Host}}: exit status 7 (74.431199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:55:50.957857   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.308389849s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-667994 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (96.641916ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-667994] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-667994
	    minikube start -p kubernetes-upgrade-667994 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6679942 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-667994 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-667994 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6m32.523615457s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-667994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-667994
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-667994: (1.250447317s)
--- PASS: TestKubernetesUpgrade (505.41s)
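
Note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) comes from a guard that refuses to start an existing cluster at an older Kubernetes version. A hedged sketch of such a check using the golang.org/x/mod/semver helper (not minikube's actual implementation):

    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    // checkDowngrade refuses a requested Kubernetes version older than the
    // one the cluster already runs.
    func checkDowngrade(current, requested string) error {
    	if semver.Compare(requested, current) < 0 {
    		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkDowngrade("v1.34.1", "v1.28.0")) // non-nil: downgrade refused
    }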

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (127.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.347681266 start -p stopped-upgrade-644281 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.347681266 start -p stopped-upgrade-644281 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.642840152s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.347681266 -p stopped-upgrade-644281 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.347681266 -p stopped-upgrade-644281 stop: (3.138137023s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-644281 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.48019124s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.26s)

                                                
                                    
TestNetworkPlugins/group/false (3.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-421337 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-421337 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (102.749149ms)

                                                
                                                
-- stdout --
	* [false-421337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:54:41.843500   49434 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:41.843625   49434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:41.843634   49434 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:41.843639   49434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:41.843874   49434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11352/.minikube/bin
	I1009 18:54:41.844394   49434 out.go:368] Setting JSON to false
	I1009 18:54:41.845263   49434 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5822,"bootTime":1760030260,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:54:41.845356   49434 start.go:141] virtualization: kvm guest
	I1009 18:54:41.847381   49434 out.go:179] * [false-421337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:54:41.848807   49434 notify.go:220] Checking for updates...
	I1009 18:54:41.848824   49434 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:54:41.850334   49434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:54:41.851597   49434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	I1009 18:54:41.853218   49434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	I1009 18:54:41.854501   49434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:54:41.856163   49434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:54:41.857727   49434 config.go:182] Loaded profile config "kubernetes-upgrade-667994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 18:54:41.857822   49434 config.go:182] Loaded profile config "offline-crio-636274": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:41.857931   49434 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:54:41.890812   49434 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:54:41.892210   49434 start.go:305] selected driver: kvm2
	I1009 18:54:41.892230   49434 start.go:925] validating driver "kvm2" against <nil>
	I1009 18:54:41.892241   49434 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:54:41.894524   49434 out.go:203] 
	W1009 18:54:41.895868   49434 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 18:54:41.897109   49434 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-421337 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-421337" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-421337

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: cri-docker daemon config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: cri-dockerd version:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: containerd daemon status:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: containerd daemon config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: /etc/containerd/config.toml:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: containerd config dump:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: crio daemon status:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: crio daemon config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: /etc/crio:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

>>> host: crio config:
* Profile "false-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-421337"

----------------------- debugLogs end: false-421337 [took: 3.032833152s] --------------------------------
helpers_test.go:175: Cleaning up "false-421337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-421337
--- PASS: TestNetworkPlugins/group/false (3.29s)

TestPause/serial/Start (104.81s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-706613 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.809174381s)
--- PASS: TestPause/serial/Start (104.81s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-644281
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-644281: (1.245219274s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:116: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (76.943501ms)
-- stdout --
	* [NoKubernetes-156430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11352/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11352/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
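
Note: the non-zero exit above is the expected outcome; minikube deliberately rejects combining --no-kubernetes with --kubernetes-version, so the test passes on exit status 14 (MK_USAGE). A minimal reproduction of the conflict and the suggested fix, sketched from the commands shown in this log:

	$ minikube start --no-kubernetes --kubernetes-version=v1.28.0   # exits 14 with MK_USAGE
	$ minikube config unset kubernetes-version                      # clears a globally configured version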

TestNoKubernetes/serial/StartWithK8s (55.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-156430 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.18900533s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-156430 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (55.48s)

TestNoKubernetes/serial/StartWithStopK8s (31.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:145: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:145: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.206719831s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-156430 status -o json
no_kubernetes_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-156430 status -o json: exit status 2 (243.97782ms)
-- stdout --
	{"Name":"NoKubernetes-156430","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:157: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-156430
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.30s)
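
Note: the status JSON above can be inspected field by field; a minimal sketch, assuming jq is available on the host:

	$ out/minikube-linux-amd64 -p NoKubernetes-156430 status -o json | jq -r .Kubelet
	Stopped

The non-zero exit (2) from minikube status corresponds to stopped components (Kubelet, APIServer), which is the expected state after restarting the profile with --no-kubernetes.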

TestNoKubernetes/serial/Start (26.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-156430 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (26.863051511s)
--- PASS: TestNoKubernetes/serial/Start (26.86s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-156430 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-156430 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.136869ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
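
Note: systemctl is-active returns non-zero when the queried unit is not active; the status 4 seen via ssh here typically indicates the unit is unknown to systemd, which is exactly what a --no-kubernetes node should report for kubelet. To check by hand (single quotes keep $? from expanding locally):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-156430 'sudo systemctl is-active kubelet; echo exit=$?'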

TestNoKubernetes/serial/ProfileList (29.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:202: (dbg) Done: out/minikube-linux-amd64 profile list: (16.373630869s)
no_kubernetes_test.go:212: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:212: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.079135155s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.45s)

TestNoKubernetes/serial/Stop (2.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-156430
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-156430: (2.174962714s)
--- PASS: TestNoKubernetes/serial/Stop (2.18s)

TestNoKubernetes/serial/StartNoArgs (30.99s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-156430 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-156430 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (30.985087319s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.99s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-156430 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-156430 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.742332ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/auto/Start (79.71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:00:13.891154   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.71169749s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.71s)

TestNetworkPlugins/group/kindnet/Start (92.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:00:50.957113   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.271064932s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-421337 "pgrep -a kubelet"
I1009 19:01:11.101345   15263 config.go:182] Loaded profile config "auto-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8mv79" [206b69a5-a8ce-4f1e-a628-66ad3be4b92d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8mv79" [206b69a5-a8ce-4f1e-a628-66ad3be4b92d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005881782s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
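
Note: the HairPin check verifies that a pod can reach itself through its own Service name (hairpin NAT): netcat connects back to the "netcat" Service on port 8080 from inside the pod that backs it. To probe the same path manually against this profile, using the command the test runs:

	$ kubectl --context auto-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

An exit status of 0 means hairpin traffic works under the auto (default) network plugin.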

TestNetworkPlugins/group/calico/Start (76.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.943787245s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.94s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bvkps" [1b133f89-6347-407a-9154-34d318cb24fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004241708s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-421337 "pgrep -a kubelet"
I1009 19:02:09.877795   15263 config.go:182] Loaded profile config "kindnet-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (40.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lr7f5" [e0087069-14e2-4125-95a7-7f3fd60d4c4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lr7f5" [e0087069-14e2-4125-95a7-7f3fd60d4c4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 40.005923002s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (40.25s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-v9tdx" [6d3157f3-e2da-4751-a99f-bc6b6d091a17] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-v9tdx" [6d3157f3-e2da-4751-a99f-bc6b6d091a17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006106951s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-421337 "pgrep -a kubelet"
I1009 19:03:01.066640   15263 config.go:182] Loaded profile config "calico-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4cpln" [e424d4e6-8b1f-4c57-b348-5d9137c9bd5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4cpln" [e424d4e6-8b1f-4c57-b348-5d9137c9bd5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006069877s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

TestNetworkPlugins/group/custom-flannel/Start (69.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.791303939s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.79s)

TestNetworkPlugins/group/enable-default-cni/Start (72.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.43612885s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.44s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (93.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.613429382s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.61s)

TestNetworkPlugins/group/bridge/Start (86.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-421337 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.151491567s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.15s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-421337 "pgrep -a kubelet"
I1009 19:04:14.259765   15263 config.go:182] Loaded profile config "custom-flannel-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-421337 replace --force -f testdata/netcat-deployment.yaml
I1009 19:04:14.989772   15263 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1009 19:04:15.018139   15263 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vp8tm" [873a357a-f023-4882-ac92-4423c0a7cfa6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vp8tm" [873a357a-f023-4882-ac92-4423c0a7cfa6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005256115s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.78s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-421337 "pgrep -a kubelet"
I1009 19:04:21.056699   15263 config.go:182] Loaded profile config "enable-default-cni-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8n9hd" [cb775297-28e4-42d5-9ced-3aa16fc2e2ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8n9hd" [cb775297-28e4-42d5-9ced-3aa16fc2e2ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004935455s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.57s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (96.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-283266 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-283266 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m36.537866125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.54s)

TestStartStop/group/no-preload/serial/FirstStart (116.5s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-253438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-253438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m56.496440627s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.50s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-m5blv" [638414e0-3517-498e-956a-e867f2253f44] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005624368s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-421337 "pgrep -a kubelet"
I1009 19:05:08.187750   15263 config.go:182] Loaded profile config "bridge-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b4mvh" [8164f0f8-062b-46fc-8da4-b2f603ae8273] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b4mvh" [8164f0f8-062b-46fc-8da4-b2f603ae8273] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006535408s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-421337 "pgrep -a kubelet"
I1009 19:05:11.269814   15263 config.go:182] Loaded profile config "flannel-421337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-421337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b8skd" [8e76cf84-b4aa-49d9-86ec-202bb1c0eafc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b8skd" [8e76cf84-b4aa-49d9-86ec-202bb1c0eafc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005456517s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.31s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-421337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-421337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
E1009 19:09:14.955918   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.962408   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.973904   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.995865   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.037402   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.119115   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.280422   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.602000   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:16.244075   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:16.721989   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:17.526133   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:20.087607   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.610332   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.616887   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.628530   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.650075   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.691729   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.773638   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:21.936272   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:22.257989   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:22.900343   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:24.182325   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:25.209968   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:26.744548   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-906079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-906079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m33.024020507s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.02s)

TestStartStop/group/newest-cni/serial/FirstStart (67.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357359 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:05:50.957391   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.347381   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.354021   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.365522   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.386987   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.429514   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.511127   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.672837   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:11.994639   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:12.636123   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:13.917792   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:06:16.479208   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357359 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m7.008824149s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-283266 create -f testdata/busybox.yaml
E1009 19:06:21.600546   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4077835b-8fd7-42da-9e14-d5e5530d7d6e] Pending
helpers_test.go:352: "busybox" [4077835b-8fd7-42da-9e14-d5e5530d7d6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4077835b-8fd7-42da-9e14-d5e5530d7d6e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.088559335s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-283266 exec busybox -- /bin/sh -c "ulimit -n"
E1009 19:06:31.842186   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)
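
The DeployApp flow above is straightforward to replay by hand: create the busybox pod, wait for readiness on the same label the test polls, then run the ulimit probe. A minimal sketch, assuming the old-k8s-version-283266 context still exists in the local kubeconfig:

  kubectl --context old-k8s-version-283266 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-283266 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
  kubectl --context old-k8s-version-283266 exec busybox -- /bin/sh -c "ulimit -n"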

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-283266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-283266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.883894217s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-283266 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)
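
Note that the addon is enabled with its image and registry redirected to a stub (echoserver pulled from fake.domain), so the subtest exercises the addon plumbing rather than a working metrics-server; the describe call only confirms the deployment was templated with the overrides. The same pair of commands, assuming a live profile:

  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-283266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context old-k8s-version-283266 describe deploy/metrics-server -n kube-system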

TestStartStop/group/old-k8s-version/serial/Stop (85.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-283266 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-283266 --alsologtostderr -v=3: (1m25.289303347s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.29s)

TestStartStop/group/no-preload/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-253438 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ca6b0d09-be8e-4de9-9dc0-7d03e2d6e0bf] Pending
helpers_test.go:352: "busybox" [ca6b0d09-be8e-4de9-9dc0-7d03e2d6e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ca6b0d09-be8e-4de9-9dc0-7d03e2d6e0bf] Running
E1009 19:06:52.323855   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004468549s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-253438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11905917s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (10.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-357359 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-357359 --alsologtostderr -v=3: (10.581082547s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.58s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-253438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-253438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/Stop (89.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-253438 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-253438 --alsologtostderr -v=3: (1m29.852708263s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.85s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357359 -n newest-cni-357359
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357359 -n newest-cni-357359: exit status 7 (63.323193ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-357359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
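
Here exit status 7 from minikube status is expected rather than fatal ("may be ok"): the host was stopped in the previous subtest, and the point is that addons can still be enabled against a stopped profile. To inspect the exit code by hand, a sketch assuming the profile still exists:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357359 -n newest-cni-357359
  echo "exit code: $?"   # 7 for the stopped host in this run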

TestStartStop/group/newest-cni/serial/SecondStart (35.4s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357359 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:07:03.664451   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.670904   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.682382   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.703830   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.745299   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.826796   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:03.988574   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:04.310343   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:04.951682   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:06.233049   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:08.794620   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357359 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (35.080894222s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357359 -n newest-cni-357359
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.40s)
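
SecondStart finishes in roughly half the FirstStart time (35s vs 67s), presumably because it boots the existing KVM machine and cached images rather than provisioning from scratch; the flags are identical to FirstStart. A quick post-restart sanity check, assuming the context was rewritten into kubeconfig (these probes are illustrative, not part of the test):

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357359 -n newest-cni-357359
  kubectl --context newest-cni-357359 get pods -n kube-system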

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-906079 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ea11262-a536-46aa-a006-950d94685524] Pending
helpers_test.go:352: "busybox" [6ea11262-a536-46aa-a006-950d94685524] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1009 19:07:13.916738   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [6ea11262-a536-46aa-a006-950d94685524] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005063339s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-906079 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-906079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-906079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119948034s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-906079 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (90.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-906079 --alsologtostderr -v=3
E1009 19:07:24.158473   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:33.285185   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-906079 --alsologtostderr -v=3: (1m30.71732307s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-357359 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-357359 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357359 -n newest-cni-357359
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357359 -n newest-cni-357359: exit status 2 (247.253557ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357359 -n newest-cni-357359
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357359 -n newest-cni-357359: exit status 2 (246.579465ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-357359 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357359 -n newest-cni-357359
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357359 -n newest-cni-357359
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.73s)
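
The Pause assertion is an asymmetric pair: after pause, the apiserver reports Paused while the kubelet reports Stopped, and both status probes exit 2, which the test tolerates the same way it tolerates 7 above. A condensed round trip, assuming a running profile (the combined status template is illustrative, not taken from the test):

  out/minikube-linux-amd64 pause -p newest-cni-357359
  out/minikube-linux-amd64 status -p newest-cni-357359 --format='{{.APIServer}}/{{.Kubelet}}'   # Paused/Stopped
  out/minikube-linux-amd64 unpause -p newest-cni-357359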

TestStartStop/group/embed-certs/serial/FirstStart (53.62s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-826213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:07:44.639991   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.782462   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.788907   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.800375   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.821835   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.863332   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:54.944831   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:55.106953   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:55.428702   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:56.070017   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:57.351828   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-826213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (53.622241907s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-283266 -n old-k8s-version-283266
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-283266 -n old-k8s-version-283266: exit status 7 (68.332555ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-283266 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (43.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-283266 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1009 19:07:59.914143   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:08:05.036082   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:08:15.278095   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:08:25.602309   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/kindnet-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-283266 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (43.399541933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-283266 -n old-k8s-version-283266
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.70s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-253438 -n no-preload-253438
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-253438 -n no-preload-253438: exit status 7 (76.078511ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-253438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (60.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-253438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-253438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m0.207768031s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-253438 -n no-preload-253438
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.54s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-826213 create -f testdata/busybox.yaml
E1009 19:08:35.760212   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1678644b-10d8-4635-8cd5-3dd286bfccd2] Pending
helpers_test.go:352: "busybox" [1678644b-10d8-4635-8cd5-3dd286bfccd2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1678644b-10d8-4635-8cd5-3dd286bfccd2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004647901s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-826213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9m27g" [ded74ccc-c78d-4d84-adf5-f68c8a83bdd3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9m27g" [ded74ccc-c78d-4d84-adf5-f68c8a83bdd3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.011612766s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)
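
UserAppExistsAfterStop is a plain readiness poll on the dashboard pod after the restart; an equivalent one-liner using the same namespace and selector, assuming the profile's context is active:

  kubectl --context old-k8s-version-283266 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s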

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-826213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-826213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (84.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-826213 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-826213 --alsologtostderr -v=3: (1m24.638575316s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (84.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079: exit status 7 (81.448824ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-906079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-906079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:08:55.207295   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-906079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.428788435s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.77s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9m27g" [ded74ccc-c78d-4d84-adf5-f68c8a83bdd3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004128318s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-283266 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-283266 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
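
VerifyKubernetesImages lists every image present in the node's container runtime and reports anything outside the expected Kubernetes set; kindnetd and the busybox test image are the anticipated leftovers here, not errors. The raw listing can be pulled the same way the test does:

  out/minikube-linux-amd64 -p old-k8s-version-283266 image list --format=json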

TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-283266 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-283266 --alsologtostderr -v=1: (1.103281813s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-283266 -n old-k8s-version-283266
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-283266 -n old-k8s-version-283266: exit status 2 (301.275767ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-283266 -n old-k8s-version-283266
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-283266 -n old-k8s-version-283266: exit status 2 (306.455951ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-283266 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-283266 -n old-k8s-version-283266
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-283266 -n old-k8s-version-283266
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2tkd" [245511aa-1afe-447e-abac-c6254c346912] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2tkd" [245511aa-1afe-447e-abac-c6254c346912] Running
E1009 19:09:31.866754   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:35.452217   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.016339163s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2tkd" [245511aa-1afe-447e-abac-c6254c346912] Running
E1009 19:09:42.108986   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006511878s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-253438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-253438 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-253438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-253438 --alsologtostderr -v=1: (1.118597584s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-253438 -n no-preload-253438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-253438 -n no-preload-253438: exit status 2 (279.033929ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-253438 -n no-preload-253438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-253438 -n no-preload-253438: exit status 2 (274.200409ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-253438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-253438 -n no-preload-253438
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-253438 -n no-preload-253438
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7k9vn" [2699c88c-3897-4e5a-b76f-a7d7234a91de] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7k9vn" [2699c88c-3897-4e5a-b76f-a7d7234a91de] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004372593s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7k9vn" [2699c88c-3897-4e5a-b76f-a7d7234a91de] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004531554s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-906079 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-906079 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-906079 --alsologtostderr -v=1
E1009 19:09:55.934549   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079: exit status 2 (243.482089ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079: exit status 2 (247.895461ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-906079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-906079 -n default-k8s-diff-port-906079
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-826213 -n embed-certs-826213
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-826213 -n embed-certs-826213: exit status 7 (64.704559ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-826213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (45.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-826213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:10:13.586793   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/bridge-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:15.234138   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:18.708969   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/bridge-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:25.476304   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:28.951161   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/bridge-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:30.817977   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/addons-676842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:36.896092   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/custom-flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:38.643501   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/calico-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:43.553479   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/enable-default-cni-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:45.957861   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/flannel-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:49.433428   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/bridge-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:10:50.957316   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/functional-396225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-826213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (44.854306929s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-826213 -n embed-certs-826213
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.23s)
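
Note: the restart above is driven entirely by flags recorded in the log. To reproduce the check locally, the sequence boils down to the sketch below (assuming a built binary at out/minikube-linux-amd64, a working kvm2 driver, and the existing stopped profile):

    out/minikube-linux-amd64 start -p embed-certs-826213 --memory=3072 \
      --alsologtostderr --wait=true --embed-certs --driver=kvm2 \
      --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
    # The test then asserts the host is reported as Running:
    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-826213 -n embed-certs-826213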

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f94pm" [0d30cdaa-b543-415b-ac31-c406432d82bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f94pm" [0d30cdaa-b543-415b-ac31-c406432d82bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004690353s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)
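
The pod wait performed by the helper here is roughly equivalent to the kubectl invocation sketched below (not the test's actual implementation; the selector, namespace, and 9m timeout are taken from the log above):

    kubectl --context embed-certs-826213 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m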

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f94pm" [0d30cdaa-b543-415b-ac31-c406432d82bc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003589839s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-826213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-826213 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-826213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-826213 -n embed-certs-826213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-826213 -n embed-certs-826213: exit status 2 (263.008184ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-826213 -n embed-certs-826213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-826213 -n embed-certs-826213: exit status 2 (246.885192ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-826213 --alsologtostderr -v=1
E1009 19:11:11.347565   15263 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11352/.minikube/profiles/auto-421337/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-826213 -n embed-certs-826213
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-826213 -n embed-certs-826213
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.70s)
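
Note: the two "exit status 2 (may be ok)" results above are expected rather than failures: minikube status exits non-zero when a component is not Running, so a paused cluster reports Paused/Stopped with exit code 2 and the harness tolerates it. A minimal sketch of the same pause round-trip (same assumptions as above):

    out/minikube-linux-amd64 pause -p embed-certs-826213 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-826213 || true  # prints Paused, exits 2
    out/minikube-linux-amd64 unpause -p embed-certs-826213 --alsologtostderr -v=1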

Test skip (40/325)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.11
266 TestNetworkPlugins/group/cilium 3.54
272 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-676842 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-421337 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-421337

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-421337

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/hosts:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/resolv.conf:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-421337

>>> host: crictl pods:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: crictl containers:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> k8s: describe netcat deployment:
error: context "kubenet-421337" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-421337" does not exist

>>> k8s: netcat logs:
error: context "kubenet-421337" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-421337" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-421337" does not exist

>>> k8s: coredns logs:
error: context "kubenet-421337" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-421337" does not exist

>>> k8s: api server logs:
error: context "kubenet-421337" does not exist

>>> host: /etc/cni:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: ip a s:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: ip r s:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: iptables-save:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: iptables table nat:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-421337" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-421337" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-421337" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: kubelet daemon config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> k8s: kubelet logs:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-421337

>>> host: docker daemon status:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: docker daemon config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: docker system info:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: cri-docker daemon status:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: cri-docker daemon config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: cri-dockerd version:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: containerd daemon status:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: containerd daemon config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: containerd config dump:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: crio daemon status:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: crio daemon config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: /etc/crio:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

>>> host: crio config:
* Profile "kubenet-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-421337"

----------------------- debugLogs end: kubenet-421337 [took: 2.955132405s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-421337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-421337
--- SKIP: TestNetworkPlugins/group/kubenet (3.11s)
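
The unconditional skip here reflects that kubenet is kubelet's legacy, non-CNI network plugin, while CRI-O wires pod networking exclusively through CNI. With this runtime a CNI has to be selected explicitly instead; a sketch (bridge being one of minikube's built-in --cni options):

    out/minikube-linux-amd64 start -p kubenet-421337 --driver=kvm2 \
      --container-runtime=crio --cni=bridge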

TestNetworkPlugins/group/cilium (3.54s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-421337 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-421337

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-421337" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

                                                
                                                

                                                
                                                

>>> host: ip r s:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: iptables-save:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: iptables table nat:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-421337

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-421337

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-421337" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-421337" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-421337

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-421337

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-421337" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-421337" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-421337" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-421337" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-421337" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: kubelet daemon config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> k8s: kubelet logs:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-421337

>>> host: docker daemon status:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: docker daemon config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: docker system info:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: cri-docker daemon status:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: cri-docker daemon config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: cri-dockerd version:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: containerd daemon status:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: containerd daemon config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: containerd config dump:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: crio daemon status:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: crio daemon config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: /etc/crio:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

>>> host: crio config:
* Profile "cilium-421337" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-421337"

----------------------- debugLogs end: cilium-421337 [took: 3.386621266s] --------------------------------
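Note: every kubectl probe above fails for the same reason. The kubeconfig dumped under ">>> k8s: kubectl config:" has clusters, contexts, and users all null, so no "cilium-421337" context can be resolved. A minimal Go sketch of that lookup, assuming client-go's clientcmd loader and an illustrative kubeconfig path (not the path this CI host actually uses):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does; the path is illustrative.
	cfg, err := clientcmd.LoadFromFile("/root/.kube/config")
	if err != nil {
		fmt.Println("load error:", err)
		return
	}
	// With clusters/contexts/users all null, this lookup fails, which is the
	// same condition kubectl reports as either "context was not found for
	// specified context: cilium-421337" or the message printed below.
	if _, ok := cfg.Contexts["cilium-421337"]; !ok {
		fmt.Println(`error: context "cilium-421337" does not exist`)
	}
}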
helpers_test.go:175: Cleaning up "cilium-421337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-421337
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-788952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-788952
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
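For reference, the driver-gated skip recorded above is the standard Go testing pattern: the test calls t.Skipf when the active VM driver is not virtualbox. A minimal sketch under that assumption; the driverName helper is hypothetical and minikube's real harness wires this differently:

package mytest

import (
	"os"
	"testing"
)

// driverName is a stand-in for however the harness knows the active VM
// driver; reading an environment variable here is purely illustrative.
func driverName() string {
	return os.Getenv("TEST_DRIVER")
}

func TestDisableDriverMounts(t *testing.T) {
	if driverName() != "virtualbox" {
		// t.Skipf produces the "--- SKIP" result seen in this report.
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... test body would exercise the --disable-driver-mounts flag here ...
}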