Test Report: KVM_Linux_crio 21773

8990789ccd20605bfce25419a1a009c7a75246f6:2025-10-20:41995

Failed tests (3/324)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    161.17
244    TestPreload                                    158.88
287    TestPause/serial/SecondStartNoReconfiguration  73.29
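
Of the three failures, only the Ingress one is expanded in this section. For local triage it can be replayed with the same commands the test runs; a minimal sketch, using the profile name, manifests, and binary path from this run's log (these will differ in other environments):

    # Replay the TestAddons/parallel/Ingress check by hand (mirrors addons_test.go).
    kubectl --context addons-323619 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-323619 replace --force -f testdata/nginx-pod-svc.yaml
    # The test polls up to 8m0s for pods matching run=nginx; kubectl wait is an
    # equivalent hand-run check.
    kubectl --context addons-323619 wait --for=condition=ready pod -l run=nginx --timeout=8m0s
    out/minikube-linux-amd64 -p addons-323619 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"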
TestAddons/parallel/Ingress (161.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-323619 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-323619 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-323619 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1d4b8ae7-8624-40f6-aef7-014cc379dda1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1d4b8ae7-8624-40f6-aef7-014cc379dda1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004577565s
I1020 12:01:20.124056  143131 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-323619 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.418614771s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
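Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT ("operation timed out"), surfaced through minikube ssh as the remote command's exit code: the request to the ingress controller on 127.0.0.1 inside the VM never completed. A hand-run probe with an explicit time limit and verbose tracing (standard curl flags, added here only for triage) shows whether the TCP connect or the HTTP response is what stalls:

    # Cap the probe at 10s; -v prints the connect/request/response phases.
    out/minikube-linux-amd64 -p addons-323619 ssh \
      "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"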
addons_test.go:288: (dbg) Run:  kubectl --context addons-323619 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.233
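The ingress-dns half of the test passed: it resolves a test hostname directly against the cluster IP that minikube ip reported (192.168.39.233 in this run). The equivalent manual check, capturing the IP the same way:

    # Resolve the example hostname against the ingress-dns server in the VM.
    MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-323619 ip)
    nslookup hello-john.test "$MINIKUBE_IP"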
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-323619 -n addons-323619
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 logs -n 25: (1.402888657s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-070035                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-070035 │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │ 20 Oct 25 11:57 UTC │
	│ start   │ --download-only -p binary-mirror-169246 --alsologtostderr --binary-mirror http://127.0.0.1:39247 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-169246 │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │                     │
	│ delete  │ -p binary-mirror-169246                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-169246 │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │ 20 Oct 25 11:57 UTC │
	│ addons  │ disable dashboard -p addons-323619                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │                     │
	│ addons  │ enable dashboard -p addons-323619                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │                     │
	│ start   │ -p addons-323619 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │ 20 Oct 25 12:00 UTC │
	│ addons  │ addons-323619 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:00 UTC │ 20 Oct 25 12:00 UTC │
	│ addons  │ addons-323619 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:00 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-323619                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ ssh     │ addons-323619 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │                     │
	│ addons  │ addons-323619 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ ip      │ addons-323619 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ ssh     │ addons-323619 ssh cat /opt/local-path-provisioner/pvc-46c0c8c7-7629-4c0a-b0ce-cef91ed80b06_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:02 UTC │
	│ addons  │ addons-323619 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ enable headlamp -p addons-323619 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:01 UTC │ 20 Oct 25 12:01 UTC │
	│ addons  │ addons-323619 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:02 UTC │ 20 Oct 25 12:02 UTC │
	│ addons  │ addons-323619 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:02 UTC │ 20 Oct 25 12:02 UTC │
	│ ip      │ addons-323619 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-323619        │ jenkins │ v1.37.0 │ 20 Oct 25 12:03 UTC │ 20 Oct 25 12:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:57:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:57:21.940504  143841 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:57:21.940770  143841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:57:21.940781  143841 out.go:374] Setting ErrFile to fd 2...
	I1020 11:57:21.940785  143841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:57:21.940973  143841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 11:57:21.941519  143841 out.go:368] Setting JSON to false
	I1020 11:57:21.942474  143841 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2377,"bootTime":1760959065,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:57:21.942576  143841 start.go:141] virtualization: kvm guest
	I1020 11:57:21.944612  143841 out.go:179] * [addons-323619] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:57:21.945880  143841 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 11:57:21.945898  143841 notify.go:220] Checking for updates...
	I1020 11:57:21.948034  143841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:57:21.949039  143841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 11:57:21.949967  143841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 11:57:21.950971  143841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 11:57:21.952283  143841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 11:57:21.953635  143841 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:57:21.983085  143841 out.go:179] * Using the kvm2 driver based on user configuration
	I1020 11:57:21.984119  143841 start.go:305] selected driver: kvm2
	I1020 11:57:21.984136  143841 start.go:925] validating driver "kvm2" against <nil>
	I1020 11:57:21.984153  143841 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 11:57:21.984849  143841 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:57:21.984947  143841 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:57:21.998775  143841 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:57:21.998805  143841 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:57:22.013646  143841 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:57:22.013699  143841 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:57:22.013990  143841 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:57:22.014019  143841 cni.go:84] Creating CNI manager for ""
	I1020 11:57:22.014062  143841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 11:57:22.014068  143841 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1020 11:57:22.014117  143841 start.go:349] cluster config:
	{Name:addons-323619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:57:22.014225  143841 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:57:22.015945  143841 out.go:179] * Starting "addons-323619" primary control-plane node in "addons-323619" cluster
	I1020 11:57:22.016908  143841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:57:22.016944  143841 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 11:57:22.016955  143841 cache.go:58] Caching tarball of preloaded images
	I1020 11:57:22.017061  143841 preload.go:233] Found /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 11:57:22.017075  143841 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 11:57:22.017394  143841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/config.json ...
	I1020 11:57:22.017436  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/config.json: {Name:mk1dd1ed039a3806c7b5adf5da1875d800b79973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:22.018032  143841 start.go:360] acquireMachinesLock for addons-323619: {Name:mk7379f3db3d78bd88fb45ecf1a2b8c8492f1da9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1020 11:57:22.018487  143841 start.go:364] duration metric: took 433.597µs to acquireMachinesLock for "addons-323619"
	I1020 11:57:22.018518  143841 start.go:93] Provisioning new machine with config: &{Name:addons-323619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:57:22.018571  143841 start.go:125] createHost starting for "" (driver="kvm2")
	I1020 11:57:22.019881  143841 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1020 11:57:22.020034  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:57:22.020117  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:57:22.033221  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I1020 11:57:22.033806  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:57:22.034472  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:57:22.034493  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:57:22.034887  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:57:22.035105  143841 main.go:141] libmachine: (addons-323619) Calling .GetMachineName
	I1020 11:57:22.035269  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:22.035415  143841 start.go:159] libmachine.API.Create for "addons-323619" (driver="kvm2")
	I1020 11:57:22.035444  143841 client.go:168] LocalClient.Create starting
	I1020 11:57:22.035483  143841 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem
	I1020 11:57:22.184115  143841 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem
	I1020 11:57:22.613481  143841 main.go:141] libmachine: Running pre-create checks...
	I1020 11:57:22.613515  143841 main.go:141] libmachine: (addons-323619) Calling .PreCreateCheck
	I1020 11:57:22.614039  143841 main.go:141] libmachine: (addons-323619) Calling .GetConfigRaw
	I1020 11:57:22.614559  143841 main.go:141] libmachine: Creating machine...
	I1020 11:57:22.614575  143841 main.go:141] libmachine: (addons-323619) Calling .Create
	I1020 11:57:22.614774  143841 main.go:141] libmachine: (addons-323619) creating domain...
	I1020 11:57:22.614792  143841 main.go:141] libmachine: (addons-323619) creating network...
	I1020 11:57:22.616501  143841 main.go:141] libmachine: (addons-323619) DBG | found existing default network
	I1020 11:57:22.616728  143841 main.go:141] libmachine: (addons-323619) DBG | <network>
	I1020 11:57:22.616756  143841 main.go:141] libmachine: (addons-323619) DBG |   <name>default</name>
	I1020 11:57:22.616771  143841 main.go:141] libmachine: (addons-323619) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1020 11:57:22.616829  143841 main.go:141] libmachine: (addons-323619) DBG |   <forward mode='nat'>
	I1020 11:57:22.616846  143841 main.go:141] libmachine: (addons-323619) DBG |     <nat>
	I1020 11:57:22.616858  143841 main.go:141] libmachine: (addons-323619) DBG |       <port start='1024' end='65535'/>
	I1020 11:57:22.616890  143841 main.go:141] libmachine: (addons-323619) DBG |     </nat>
	I1020 11:57:22.616916  143841 main.go:141] libmachine: (addons-323619) DBG |   </forward>
	I1020 11:57:22.616924  143841 main.go:141] libmachine: (addons-323619) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1020 11:57:22.616937  143841 main.go:141] libmachine: (addons-323619) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1020 11:57:22.616968  143841 main.go:141] libmachine: (addons-323619) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1020 11:57:22.616988  143841 main.go:141] libmachine: (addons-323619) DBG |     <dhcp>
	I1020 11:57:22.617016  143841 main.go:141] libmachine: (addons-323619) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1020 11:57:22.617034  143841 main.go:141] libmachine: (addons-323619) DBG |     </dhcp>
	I1020 11:57:22.617051  143841 main.go:141] libmachine: (addons-323619) DBG |   </ip>
	I1020 11:57:22.617061  143841 main.go:141] libmachine: (addons-323619) DBG | </network>
	I1020 11:57:22.617070  143841 main.go:141] libmachine: (addons-323619) DBG | 
	I1020 11:57:22.618187  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:22.618023  143869 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112dc0}
	I1020 11:57:22.618245  143841 main.go:141] libmachine: (addons-323619) DBG | defining private network:
	I1020 11:57:22.618255  143841 main.go:141] libmachine: (addons-323619) DBG | 
	I1020 11:57:22.618260  143841 main.go:141] libmachine: (addons-323619) DBG | <network>
	I1020 11:57:22.618265  143841 main.go:141] libmachine: (addons-323619) DBG |   <name>mk-addons-323619</name>
	I1020 11:57:22.618273  143841 main.go:141] libmachine: (addons-323619) DBG |   <dns enable='no'/>
	I1020 11:57:22.618278  143841 main.go:141] libmachine: (addons-323619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1020 11:57:22.618284  143841 main.go:141] libmachine: (addons-323619) DBG |     <dhcp>
	I1020 11:57:22.618289  143841 main.go:141] libmachine: (addons-323619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1020 11:57:22.618296  143841 main.go:141] libmachine: (addons-323619) DBG |     </dhcp>
	I1020 11:57:22.618300  143841 main.go:141] libmachine: (addons-323619) DBG |   </ip>
	I1020 11:57:22.618305  143841 main.go:141] libmachine: (addons-323619) DBG | </network>
	I1020 11:57:22.618314  143841 main.go:141] libmachine: (addons-323619) DBG | 
	I1020 11:57:22.623927  143841 main.go:141] libmachine: (addons-323619) DBG | creating private network mk-addons-323619 192.168.39.0/24...
	I1020 11:57:22.688124  143841 main.go:141] libmachine: (addons-323619) DBG | private network mk-addons-323619 192.168.39.0/24 created
	I1020 11:57:22.688359  143841 main.go:141] libmachine: (addons-323619) DBG | <network>
	I1020 11:57:22.688377  143841 main.go:141] libmachine: (addons-323619) DBG |   <name>mk-addons-323619</name>
	I1020 11:57:22.688390  143841 main.go:141] libmachine: (addons-323619) setting up store path in /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619 ...
	I1020 11:57:22.688415  143841 main.go:141] libmachine: (addons-323619) DBG |   <uuid>8d29a973-00c7-45b3-8085-a42d0e6ff073</uuid>
	I1020 11:57:22.688433  143841 main.go:141] libmachine: (addons-323619) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1020 11:57:22.688470  143841 main.go:141] libmachine: (addons-323619) building disk image from file:///home/jenkins/minikube-integration/21773-139101/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1020 11:57:22.688481  143841 main.go:141] libmachine: (addons-323619) DBG |   <mac address='52:54:00:05:63:ff'/>
	I1020 11:57:22.688491  143841 main.go:141] libmachine: (addons-323619) DBG |   <dns enable='no'/>
	I1020 11:57:22.688503  143841 main.go:141] libmachine: (addons-323619) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1020 11:57:22.688513  143841 main.go:141] libmachine: (addons-323619) DBG |     <dhcp>
	I1020 11:57:22.688526  143841 main.go:141] libmachine: (addons-323619) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1020 11:57:22.688557  143841 main.go:141] libmachine: (addons-323619) DBG |     </dhcp>
	I1020 11:57:22.688579  143841 main.go:141] libmachine: (addons-323619) DBG |   </ip>
	I1020 11:57:22.688604  143841 main.go:141] libmachine: (addons-323619) DBG | </network>
	I1020 11:57:22.688615  143841 main.go:141] libmachine: (addons-323619) DBG | 
	I1020 11:57:22.688649  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:22.688363  143869 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 11:57:22.688754  143841 main.go:141] libmachine: (addons-323619) Downloading /home/jenkins/minikube-integration/21773-139101/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21773-139101/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1020 11:57:23.002883  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:23.002744  143869 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa...
	I1020 11:57:23.285855  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:23.285678  143869 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/addons-323619.rawdisk...
	I1020 11:57:23.285893  143841 main.go:141] libmachine: (addons-323619) DBG | Writing magic tar header
	I1020 11:57:23.285911  143841 main.go:141] libmachine: (addons-323619) DBG | Writing SSH key tar header
	I1020 11:57:23.285924  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:23.285828  143869 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619 ...
	I1020 11:57:23.285942  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619
	I1020 11:57:23.286062  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619 (perms=drwx------)
	I1020 11:57:23.286089  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21773-139101/.minikube/machines
	I1020 11:57:23.286109  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins/minikube-integration/21773-139101/.minikube/machines (perms=drwxr-xr-x)
	I1020 11:57:23.286190  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 11:57:23.286230  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins/minikube-integration/21773-139101/.minikube (perms=drwxr-xr-x)
	I1020 11:57:23.286244  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21773-139101
	I1020 11:57:23.286255  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins/minikube-integration/21773-139101 (perms=drwxrwxr-x)
	I1020 11:57:23.286268  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1020 11:57:23.286281  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home/jenkins
	I1020 11:57:23.286289  143841 main.go:141] libmachine: (addons-323619) DBG | checking permissions on dir: /home
	I1020 11:57:23.286298  143841 main.go:141] libmachine: (addons-323619) DBG | skipping /home - not owner
	I1020 11:57:23.286335  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1020 11:57:23.286360  143841 main.go:141] libmachine: (addons-323619) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1020 11:57:23.286377  143841 main.go:141] libmachine: (addons-323619) defining domain...
	I1020 11:57:23.287474  143841 main.go:141] libmachine: (addons-323619) defining domain using XML: 
	I1020 11:57:23.287500  143841 main.go:141] libmachine: (addons-323619) <domain type='kvm'>
	I1020 11:57:23.287510  143841 main.go:141] libmachine: (addons-323619)   <name>addons-323619</name>
	I1020 11:57:23.287521  143841 main.go:141] libmachine: (addons-323619)   <memory unit='MiB'>4096</memory>
	I1020 11:57:23.287529  143841 main.go:141] libmachine: (addons-323619)   <vcpu>2</vcpu>
	I1020 11:57:23.287538  143841 main.go:141] libmachine: (addons-323619)   <features>
	I1020 11:57:23.287551  143841 main.go:141] libmachine: (addons-323619)     <acpi/>
	I1020 11:57:23.287560  143841 main.go:141] libmachine: (addons-323619)     <apic/>
	I1020 11:57:23.287572  143841 main.go:141] libmachine: (addons-323619)     <pae/>
	I1020 11:57:23.287581  143841 main.go:141] libmachine: (addons-323619)   </features>
	I1020 11:57:23.287591  143841 main.go:141] libmachine: (addons-323619)   <cpu mode='host-passthrough'>
	I1020 11:57:23.287603  143841 main.go:141] libmachine: (addons-323619)   </cpu>
	I1020 11:57:23.287636  143841 main.go:141] libmachine: (addons-323619)   <os>
	I1020 11:57:23.287659  143841 main.go:141] libmachine: (addons-323619)     <type>hvm</type>
	I1020 11:57:23.287669  143841 main.go:141] libmachine: (addons-323619)     <boot dev='cdrom'/>
	I1020 11:57:23.287679  143841 main.go:141] libmachine: (addons-323619)     <boot dev='hd'/>
	I1020 11:57:23.287703  143841 main.go:141] libmachine: (addons-323619)     <bootmenu enable='no'/>
	I1020 11:57:23.287723  143841 main.go:141] libmachine: (addons-323619)   </os>
	I1020 11:57:23.287736  143841 main.go:141] libmachine: (addons-323619)   <devices>
	I1020 11:57:23.287746  143841 main.go:141] libmachine: (addons-323619)     <disk type='file' device='cdrom'>
	I1020 11:57:23.287772  143841 main.go:141] libmachine: (addons-323619)       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/boot2docker.iso'/>
	I1020 11:57:23.287782  143841 main.go:141] libmachine: (addons-323619)       <target dev='hdc' bus='scsi'/>
	I1020 11:57:23.287790  143841 main.go:141] libmachine: (addons-323619)       <readonly/>
	I1020 11:57:23.287797  143841 main.go:141] libmachine: (addons-323619)     </disk>
	I1020 11:57:23.287826  143841 main.go:141] libmachine: (addons-323619)     <disk type='file' device='disk'>
	I1020 11:57:23.287841  143841 main.go:141] libmachine: (addons-323619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1020 11:57:23.287850  143841 main.go:141] libmachine: (addons-323619)       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/addons-323619.rawdisk'/>
	I1020 11:57:23.287855  143841 main.go:141] libmachine: (addons-323619)       <target dev='hda' bus='virtio'/>
	I1020 11:57:23.287860  143841 main.go:141] libmachine: (addons-323619)     </disk>
	I1020 11:57:23.287864  143841 main.go:141] libmachine: (addons-323619)     <interface type='network'>
	I1020 11:57:23.287872  143841 main.go:141] libmachine: (addons-323619)       <source network='mk-addons-323619'/>
	I1020 11:57:23.287877  143841 main.go:141] libmachine: (addons-323619)       <model type='virtio'/>
	I1020 11:57:23.287882  143841 main.go:141] libmachine: (addons-323619)     </interface>
	I1020 11:57:23.287886  143841 main.go:141] libmachine: (addons-323619)     <interface type='network'>
	I1020 11:57:23.287894  143841 main.go:141] libmachine: (addons-323619)       <source network='default'/>
	I1020 11:57:23.287898  143841 main.go:141] libmachine: (addons-323619)       <model type='virtio'/>
	I1020 11:57:23.287905  143841 main.go:141] libmachine: (addons-323619)     </interface>
	I1020 11:57:23.287909  143841 main.go:141] libmachine: (addons-323619)     <serial type='pty'>
	I1020 11:57:23.287925  143841 main.go:141] libmachine: (addons-323619)       <target port='0'/>
	I1020 11:57:23.287941  143841 main.go:141] libmachine: (addons-323619)     </serial>
	I1020 11:57:23.287949  143841 main.go:141] libmachine: (addons-323619)     <console type='pty'>
	I1020 11:57:23.287958  143841 main.go:141] libmachine: (addons-323619)       <target type='serial' port='0'/>
	I1020 11:57:23.287974  143841 main.go:141] libmachine: (addons-323619)     </console>
	I1020 11:57:23.287990  143841 main.go:141] libmachine: (addons-323619)     <rng model='virtio'>
	I1020 11:57:23.288002  143841 main.go:141] libmachine: (addons-323619)       <backend model='random'>/dev/random</backend>
	I1020 11:57:23.288011  143841 main.go:141] libmachine: (addons-323619)     </rng>
	I1020 11:57:23.288021  143841 main.go:141] libmachine: (addons-323619)   </devices>
	I1020 11:57:23.288028  143841 main.go:141] libmachine: (addons-323619) </domain>
	I1020 11:57:23.288036  143841 main.go:141] libmachine: (addons-323619) 
	I1020 11:57:23.294767  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:c2:b4:65 in network default
	I1020 11:57:23.295317  143841 main.go:141] libmachine: (addons-323619) starting domain...
	I1020 11:57:23.295336  143841 main.go:141] libmachine: (addons-323619) ensuring networks are active...
	I1020 11:57:23.295348  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:23.296093  143841 main.go:141] libmachine: (addons-323619) Ensuring network default is active
	I1020 11:57:23.296453  143841 main.go:141] libmachine: (addons-323619) Ensuring network mk-addons-323619 is active
	I1020 11:57:23.297332  143841 main.go:141] libmachine: (addons-323619) getting domain XML...
	I1020 11:57:23.298305  143841 main.go:141] libmachine: (addons-323619) DBG | starting domain XML:
	I1020 11:57:23.298331  143841 main.go:141] libmachine: (addons-323619) DBG | <domain type='kvm'>
	I1020 11:57:23.298342  143841 main.go:141] libmachine: (addons-323619) DBG |   <name>addons-323619</name>
	I1020 11:57:23.298390  143841 main.go:141] libmachine: (addons-323619) DBG |   <uuid>89313cb4-2045-4427-9e99-779a8a628312</uuid>
	I1020 11:57:23.298434  143841 main.go:141] libmachine: (addons-323619) DBG |   <memory unit='KiB'>4194304</memory>
	I1020 11:57:23.298453  143841 main.go:141] libmachine: (addons-323619) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1020 11:57:23.298468  143841 main.go:141] libmachine: (addons-323619) DBG |   <vcpu placement='static'>2</vcpu>
	I1020 11:57:23.298479  143841 main.go:141] libmachine: (addons-323619) DBG |   <os>
	I1020 11:57:23.298489  143841 main.go:141] libmachine: (addons-323619) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1020 11:57:23.298498  143841 main.go:141] libmachine: (addons-323619) DBG |     <boot dev='cdrom'/>
	I1020 11:57:23.298504  143841 main.go:141] libmachine: (addons-323619) DBG |     <boot dev='hd'/>
	I1020 11:57:23.298518  143841 main.go:141] libmachine: (addons-323619) DBG |     <bootmenu enable='no'/>
	I1020 11:57:23.298546  143841 main.go:141] libmachine: (addons-323619) DBG |   </os>
	I1020 11:57:23.298567  143841 main.go:141] libmachine: (addons-323619) DBG |   <features>
	I1020 11:57:23.298574  143841 main.go:141] libmachine: (addons-323619) DBG |     <acpi/>
	I1020 11:57:23.298587  143841 main.go:141] libmachine: (addons-323619) DBG |     <apic/>
	I1020 11:57:23.298595  143841 main.go:141] libmachine: (addons-323619) DBG |     <pae/>
	I1020 11:57:23.298599  143841 main.go:141] libmachine: (addons-323619) DBG |   </features>
	I1020 11:57:23.298610  143841 main.go:141] libmachine: (addons-323619) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1020 11:57:23.298614  143841 main.go:141] libmachine: (addons-323619) DBG |   <clock offset='utc'/>
	I1020 11:57:23.298630  143841 main.go:141] libmachine: (addons-323619) DBG |   <on_poweroff>destroy</on_poweroff>
	I1020 11:57:23.298642  143841 main.go:141] libmachine: (addons-323619) DBG |   <on_reboot>restart</on_reboot>
	I1020 11:57:23.298648  143841 main.go:141] libmachine: (addons-323619) DBG |   <on_crash>destroy</on_crash>
	I1020 11:57:23.298656  143841 main.go:141] libmachine: (addons-323619) DBG |   <devices>
	I1020 11:57:23.298662  143841 main.go:141] libmachine: (addons-323619) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1020 11:57:23.298670  143841 main.go:141] libmachine: (addons-323619) DBG |     <disk type='file' device='cdrom'>
	I1020 11:57:23.298677  143841 main.go:141] libmachine: (addons-323619) DBG |       <driver name='qemu' type='raw'/>
	I1020 11:57:23.298687  143841 main.go:141] libmachine: (addons-323619) DBG |       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/boot2docker.iso'/>
	I1020 11:57:23.298696  143841 main.go:141] libmachine: (addons-323619) DBG |       <target dev='hdc' bus='scsi'/>
	I1020 11:57:23.298705  143841 main.go:141] libmachine: (addons-323619) DBG |       <readonly/>
	I1020 11:57:23.298717  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1020 11:57:23.298734  143841 main.go:141] libmachine: (addons-323619) DBG |     </disk>
	I1020 11:57:23.298768  143841 main.go:141] libmachine: (addons-323619) DBG |     <disk type='file' device='disk'>
	I1020 11:57:23.298794  143841 main.go:141] libmachine: (addons-323619) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1020 11:57:23.298822  143841 main.go:141] libmachine: (addons-323619) DBG |       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/addons-323619.rawdisk'/>
	I1020 11:57:23.298835  143841 main.go:141] libmachine: (addons-323619) DBG |       <target dev='hda' bus='virtio'/>
	I1020 11:57:23.298851  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1020 11:57:23.298861  143841 main.go:141] libmachine: (addons-323619) DBG |     </disk>
	I1020 11:57:23.298882  143841 main.go:141] libmachine: (addons-323619) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1020 11:57:23.298904  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1020 11:57:23.298914  143841 main.go:141] libmachine: (addons-323619) DBG |     </controller>
	I1020 11:57:23.298924  143841 main.go:141] libmachine: (addons-323619) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1020 11:57:23.298937  143841 main.go:141] libmachine: (addons-323619) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1020 11:57:23.298950  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1020 11:57:23.298961  143841 main.go:141] libmachine: (addons-323619) DBG |     </controller>
	I1020 11:57:23.298971  143841 main.go:141] libmachine: (addons-323619) DBG |     <interface type='network'>
	I1020 11:57:23.298981  143841 main.go:141] libmachine: (addons-323619) DBG |       <mac address='52:54:00:71:f8:e0'/>
	I1020 11:57:23.298990  143841 main.go:141] libmachine: (addons-323619) DBG |       <source network='mk-addons-323619'/>
	I1020 11:57:23.298996  143841 main.go:141] libmachine: (addons-323619) DBG |       <model type='virtio'/>
	I1020 11:57:23.299013  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1020 11:57:23.299019  143841 main.go:141] libmachine: (addons-323619) DBG |     </interface>
	I1020 11:57:23.299024  143841 main.go:141] libmachine: (addons-323619) DBG |     <interface type='network'>
	I1020 11:57:23.299029  143841 main.go:141] libmachine: (addons-323619) DBG |       <mac address='52:54:00:c2:b4:65'/>
	I1020 11:57:23.299034  143841 main.go:141] libmachine: (addons-323619) DBG |       <source network='default'/>
	I1020 11:57:23.299039  143841 main.go:141] libmachine: (addons-323619) DBG |       <model type='virtio'/>
	I1020 11:57:23.299045  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1020 11:57:23.299051  143841 main.go:141] libmachine: (addons-323619) DBG |     </interface>
	I1020 11:57:23.299056  143841 main.go:141] libmachine: (addons-323619) DBG |     <serial type='pty'>
	I1020 11:57:23.299065  143841 main.go:141] libmachine: (addons-323619) DBG |       <target type='isa-serial' port='0'>
	I1020 11:57:23.299073  143841 main.go:141] libmachine: (addons-323619) DBG |         <model name='isa-serial'/>
	I1020 11:57:23.299078  143841 main.go:141] libmachine: (addons-323619) DBG |       </target>
	I1020 11:57:23.299085  143841 main.go:141] libmachine: (addons-323619) DBG |     </serial>
	I1020 11:57:23.299094  143841 main.go:141] libmachine: (addons-323619) DBG |     <console type='pty'>
	I1020 11:57:23.299099  143841 main.go:141] libmachine: (addons-323619) DBG |       <target type='serial' port='0'/>
	I1020 11:57:23.299107  143841 main.go:141] libmachine: (addons-323619) DBG |     </console>
	I1020 11:57:23.299111  143841 main.go:141] libmachine: (addons-323619) DBG |     <input type='mouse' bus='ps2'/>
	I1020 11:57:23.299117  143841 main.go:141] libmachine: (addons-323619) DBG |     <input type='keyboard' bus='ps2'/>
	I1020 11:57:23.299137  143841 main.go:141] libmachine: (addons-323619) DBG |     <audio id='1' type='none'/>
	I1020 11:57:23.299147  143841 main.go:141] libmachine: (addons-323619) DBG |     <memballoon model='virtio'>
	I1020 11:57:23.299153  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1020 11:57:23.299160  143841 main.go:141] libmachine: (addons-323619) DBG |     </memballoon>
	I1020 11:57:23.299164  143841 main.go:141] libmachine: (addons-323619) DBG |     <rng model='virtio'>
	I1020 11:57:23.299170  143841 main.go:141] libmachine: (addons-323619) DBG |       <backend model='random'>/dev/random</backend>
	I1020 11:57:23.299180  143841 main.go:141] libmachine: (addons-323619) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1020 11:57:23.299185  143841 main.go:141] libmachine: (addons-323619) DBG |     </rng>
	I1020 11:57:23.299190  143841 main.go:141] libmachine: (addons-323619) DBG |   </devices>
	I1020 11:57:23.299195  143841 main.go:141] libmachine: (addons-323619) DBG | </domain>
	I1020 11:57:23.299202  143841 main.go:141] libmachine: (addons-323619) DBG | 
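The tail of the libvirt domain XML above gives the VM two virtio NICs: one on the private mk-addons-323619 network (the MAC that the IP wait below keys on) and one on libvirt's default NAT network for outbound traffic, plus a virtio memballoon and an RNG backed by /dev/random. A minimal sketch of how such an interface fragment could be templated from Go; the nic struct and template here are illustrative, not minikube's actual types:

    package main

    import (
        "os"
        "text/template"
    )

    type nic struct{ MAC, Network string }

    const ifaceTmpl = `<interface type='network'>
      <mac address='{{.MAC}}'/>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
    `

    func main() {
        t := template.Must(template.New("iface").Parse(ifaceTmpl))
        for _, n := range []nic{
            {"52:54:00:71:f8:e0", "mk-addons-323619"}, // private cluster network
            {"52:54:00:c2:b4:65", "default"},          // libvirt NAT, outbound traffic
        } {
            if err := t.Execute(os.Stdout, n); err != nil {
                panic(err)
            }
        }
    }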
	I1020 11:57:24.627240  143841 main.go:141] libmachine: (addons-323619) waiting for domain to start...
	I1020 11:57:24.628539  143841 main.go:141] libmachine: (addons-323619) domain is now running
	I1020 11:57:24.628563  143841 main.go:141] libmachine: (addons-323619) waiting for IP...
	I1020 11:57:24.629377  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:24.629850  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:24.629892  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:24.630125  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:24.630210  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:24.630138  143869 retry.go:31] will retry after 208.952795ms: waiting for domain to come up
	I1020 11:57:24.840836  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:24.841315  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:24.841343  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:24.841654  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:24.841686  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:24.841625  143869 retry.go:31] will retry after 345.95682ms: waiting for domain to come up
	I1020 11:57:25.189364  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:25.189885  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:25.189914  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:25.190158  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:25.190221  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:25.190156  143869 retry.go:31] will retry after 297.545616ms: waiting for domain to come up
	I1020 11:57:25.489730  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:25.490266  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:25.490285  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:25.490603  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:25.490665  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:25.490588  143869 retry.go:31] will retry after 369.305726ms: waiting for domain to come up
	I1020 11:57:25.861330  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:25.862003  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:25.862028  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:25.862445  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:25.862469  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:25.862365  143869 retry.go:31] will retry after 639.944426ms: waiting for domain to come up
	I1020 11:57:26.504374  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:26.504853  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:26.504879  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:26.505223  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:26.505255  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:26.505178  143869 retry.go:31] will retry after 769.093811ms: waiting for domain to come up
	I1020 11:57:27.276149  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:27.276727  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:27.276779  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:27.277068  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:27.277116  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:27.277053  143869 retry.go:31] will retry after 902.54323ms: waiting for domain to come up
	I1020 11:57:28.181316  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:28.181871  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:28.181898  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:28.182135  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:28.182164  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:28.182116  143869 retry.go:31] will retry after 1.060263511s: waiting for domain to come up
	I1020 11:57:29.244483  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:29.244984  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:29.245009  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:29.245263  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:29.245287  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:29.245252  143869 retry.go:31] will retry after 1.653610595s: waiting for domain to come up
	I1020 11:57:30.901377  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:30.901858  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:30.901886  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:30.902132  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:30.902232  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:30.902154  143869 retry.go:31] will retry after 2.027079635s: waiting for domain to come up
	I1020 11:57:32.931337  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:32.931911  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:32.931950  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:32.932175  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:32.932199  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:32.932158  143869 retry.go:31] will retry after 2.44795771s: waiting for domain to come up
	I1020 11:57:35.383079  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:35.383655  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:35.383678  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:35.384114  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:35.384203  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:35.384096  143869 retry.go:31] will retry after 2.363899182s: waiting for domain to come up
	I1020 11:57:37.749490  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:37.749959  143841 main.go:141] libmachine: (addons-323619) DBG | no network interface addresses found for domain addons-323619 (source=lease)
	I1020 11:57:37.749990  143841 main.go:141] libmachine: (addons-323619) DBG | trying to list again with source=arp
	I1020 11:57:37.750260  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find current IP address of domain addons-323619 in network mk-addons-323619 (interfaces detected: [])
	I1020 11:57:37.750295  143841 main.go:141] libmachine: (addons-323619) DBG | I1020 11:57:37.750231  143869 retry.go:31] will retry after 4.440598135s: waiting for domain to come up
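The retry loop above (retry.go:31) polls libvirt for the domain's address, first from the DHCP lease table and then falling back to ARP, sleeping a growing, jittered interval between attempts. A stdlib-only sketch of the same wait-with-backoff pattern; domainHasIP is a hypothetical stand-in for the lease/ARP lookups:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // domainHasIP is a hypothetical stand-in for the lease/ARP lookups
    // the log performs against libvirt for MAC 52:54:00:71:f8:e0.
    func domainHasIP() (string, bool) { return "", false }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := domainHasIP(); ok {
                return ip, nil
            }
            // Growing, jittered delay, mirroring the 208ms -> 4.4s
            // progression in the retries above.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 2*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        if ip, err := waitForIP(30 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("domain IP:", ip)
        }
    }

The jitter keeps many concurrent waiters from polling libvirt in lockstep.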
	I1020 11:57:42.192659  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.193194  143841 main.go:141] libmachine: (addons-323619) found domain IP: 192.168.39.233
	I1020 11:57:42.193226  143841 main.go:141] libmachine: (addons-323619) reserving static IP address...
	I1020 11:57:42.193240  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has current primary IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.193626  143841 main.go:141] libmachine: (addons-323619) DBG | unable to find host DHCP lease matching {name: "addons-323619", mac: "52:54:00:71:f8:e0", ip: "192.168.39.233"} in network mk-addons-323619
	I1020 11:57:42.429712  143841 main.go:141] libmachine: (addons-323619) reserved static IP address 192.168.39.233 for domain addons-323619
	I1020 11:57:42.429748  143841 main.go:141] libmachine: (addons-323619) waiting for SSH...
	I1020 11:57:42.429758  143841 main.go:141] libmachine: (addons-323619) DBG | Getting to WaitForSSH function...
	I1020 11:57:42.432798  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.433397  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:minikube Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:42.433467  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.433674  143841 main.go:141] libmachine: (addons-323619) DBG | Using SSH client type: external
	I1020 11:57:42.433705  143841 main.go:141] libmachine: (addons-323619) DBG | Using SSH private key: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa (-rw-------)
	I1020 11:57:42.433769  143841 main.go:141] libmachine: (addons-323619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1020 11:57:42.433785  143841 main.go:141] libmachine: (addons-323619) DBG | About to run SSH command:
	I1020 11:57:42.433801  143841 main.go:141] libmachine: (addons-323619) DBG | exit 0
	I1020 11:57:42.571195  143841 main.go:141] libmachine: (addons-323619) DBG | SSH cmd err, output: <nil>: 
	I1020 11:57:42.571507  143841 main.go:141] libmachine: (addons-323619) domain creation complete
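WaitForSSH above is just `exit 0` run through an external ssh client with the options dumped at 11:57:42.433769, repeated until the command exits cleanly. A sketch of that readiness probe with os/exec, reusing the address and key path from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` through the system ssh client with a subset
    // of the options shown in the log above.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil // exit status 0 means sshd answered
    }

    func main() {
        key := "/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa"
        for !sshReady("192.168.39.233", key) {
            time.Sleep(time.Second)
        }
        fmt.Println("SSH is available")
    }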
	I1020 11:57:42.571879  143841 main.go:141] libmachine: (addons-323619) Calling .GetConfigRaw
	I1020 11:57:42.572516  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:42.572723  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:42.572902  143841 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1020 11:57:42.572923  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:57:42.574250  143841 main.go:141] libmachine: Detecting operating system of created instance...
	I1020 11:57:42.574286  143841 main.go:141] libmachine: Waiting for SSH to be available...
	I1020 11:57:42.574313  143841 main.go:141] libmachine: Getting to WaitForSSH function...
	I1020 11:57:42.574325  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:42.578009  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.578464  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:42.578493  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.578677  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:42.578860  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.579002  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.579155  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:42.579333  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:42.579574  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:42.579594  143841 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1020 11:57:42.690817  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 11:57:42.690839  143841 main.go:141] libmachine: Detecting the provisioner...
	I1020 11:57:42.690848  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:42.693975  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.694436  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:42.694481  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.694569  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:42.694784  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.694938  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.695049  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:42.695201  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:42.695494  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:42.695506  143841 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1020 11:57:42.806607  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1020 11:57:42.806701  143841 main.go:141] libmachine: found compatible host: buildroot
	I1020 11:57:42.806717  143841 main.go:141] libmachine: Provisioning with buildroot...
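Detecting the provisioner comes down to running `cat /etc/os-release` over SSH and matching the ID field (buildroot here) against the provisioners libmachine knows. A small sketch of parsing that output, using the file contents shown above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Contents of /etc/os-release as returned over SSH above.
    const osRelease = `NAME=Buildroot
    VERSION=2025.02-dirty
    ID=buildroot
    VERSION_ID=2025.02
    PRETTY_NAME="Buildroot 2025.02"`

    func parseOSRelease(s string) map[string]string {
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(s))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                kv[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
            }
        }
        return kv
    }

    func main() {
        info := parseOSRelease(osRelease)
        fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2025.02
    }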
	I1020 11:57:42.806730  143841 main.go:141] libmachine: (addons-323619) Calling .GetMachineName
	I1020 11:57:42.807036  143841 buildroot.go:166] provisioning hostname "addons-323619"
	I1020 11:57:42.807067  143841 main.go:141] libmachine: (addons-323619) Calling .GetMachineName
	I1020 11:57:42.807265  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:42.810440  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.810845  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:42.810866  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.811138  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:42.811348  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.811535  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.811676  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:42.811872  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:42.812148  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:42.812163  143841 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-323619 && echo "addons-323619" | sudo tee /etc/hostname
	I1020 11:57:42.939529  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-323619
	
	I1020 11:57:42.939561  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:42.942785  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.943288  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:42.943315  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:42.943526  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:42.943748  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.943933  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:42.944054  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:42.944192  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:42.944527  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:42.944550  143841 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-323619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-323619/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-323619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 11:57:43.066293  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 11:57:43.066330  143841 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21773-139101/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-139101/.minikube}
	I1020 11:57:43.066357  143841 buildroot.go:174] setting up certificates
	I1020 11:57:43.066371  143841 provision.go:84] configureAuth start
	I1020 11:57:43.066383  143841 main.go:141] libmachine: (addons-323619) Calling .GetMachineName
	I1020 11:57:43.066690  143841 main.go:141] libmachine: (addons-323619) Calling .GetIP
	I1020 11:57:43.069452  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.069847  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.069877  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.069984  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.072279  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.072615  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.072638  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.072837  143841 provision.go:143] copyHostCerts
	I1020 11:57:43.072909  143841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem (1675 bytes)
	I1020 11:57:43.073039  143841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem (1082 bytes)
	I1020 11:57:43.073121  143841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem (1123 bytes)
	I1020 11:57:43.073193  143841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem org=jenkins.addons-323619 san=[127.0.0.1 192.168.39.233 addons-323619 localhost minikube]
	I1020 11:57:43.198244  143841 provision.go:177] copyRemoteCerts
	I1020 11:57:43.198343  143841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 11:57:43.198375  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.201345  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.201728  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.201760  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.201952  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.202163  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.202326  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.202501  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:57:43.289801  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 11:57:43.323418  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 11:57:43.353112  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 11:57:43.383832  143841 provision.go:87] duration metric: took 317.441394ms to configureAuth
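configureAuth generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost, and minikube, as listed in the san=[...] line above. A stdlib sketch of issuing such a cert; it is self-signed for brevity, whereas the real flow signs with the ca-key.pem shown in the log, and the 26280h lifetime is taken from the CertExpiration field of the cluster config logged later in this run:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-323619"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the san=[...] line above: IPs plus DNS names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.233")},
            DNSNames:    []string{"addons-323619", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }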
	I1020 11:57:43.383868  143841 buildroot.go:189] setting minikube options for container-runtime
	I1020 11:57:43.384098  143841 config.go:182] Loaded profile config "addons-323619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:57:43.384241  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.387068  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.387516  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.387559  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.387792  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.388020  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.388216  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.388361  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.388542  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:43.388772  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:43.388792  143841 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 11:57:43.637219  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 11:57:43.637255  143841 main.go:141] libmachine: Checking connection to Docker...
	I1020 11:57:43.637280  143841 main.go:141] libmachine: (addons-323619) Calling .GetURL
	I1020 11:57:43.638544  143841 main.go:141] libmachine: (addons-323619) DBG | using libvirt version 8000000
	I1020 11:57:43.640775  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.641157  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.641198  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.641380  143841 main.go:141] libmachine: Docker is up and running!
	I1020 11:57:43.641396  143841 main.go:141] libmachine: Reticulating splines...
	I1020 11:57:43.641423  143841 client.go:171] duration metric: took 21.605963314s to LocalClient.Create
	I1020 11:57:43.641518  143841 start.go:167] duration metric: took 21.606056997s to libmachine.API.Create "addons-323619"
	I1020 11:57:43.641553  143841 start.go:293] postStartSetup for "addons-323619" (driver="kvm2")
	I1020 11:57:43.641567  143841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 11:57:43.641595  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:43.641874  143841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 11:57:43.641898  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.644688  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.645093  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.645122  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.645320  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.645551  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.645727  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.645870  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:57:43.732817  143841 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 11:57:43.737524  143841 info.go:137] Remote host: Buildroot 2025.02
	I1020 11:57:43.737553  143841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/addons for local assets ...
	I1020 11:57:43.737644  143841 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/files for local assets ...
	I1020 11:57:43.737677  143841 start.go:296] duration metric: took 96.115054ms for postStartSetup
	I1020 11:57:43.737723  143841 main.go:141] libmachine: (addons-323619) Calling .GetConfigRaw
	I1020 11:57:43.738345  143841 main.go:141] libmachine: (addons-323619) Calling .GetIP
	I1020 11:57:43.741250  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.741685  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.741729  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.741972  143841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/config.json ...
	I1020 11:57:43.742182  143841 start.go:128] duration metric: took 21.723599501s to createHost
	I1020 11:57:43.742213  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.744565  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.744951  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.744981  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.745155  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.745347  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.745539  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.745684  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.745835  143841 main.go:141] libmachine: Using SSH client type: native
	I1020 11:57:43.746041  143841 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I1020 11:57:43.746051  143841 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1020 11:57:43.862010  143841 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760961463.838423783
	
	I1020 11:57:43.862041  143841 fix.go:216] guest clock: 1760961463.838423783
	I1020 11:57:43.862052  143841 fix.go:229] Guest: 2025-10-20 11:57:43.838423783 +0000 UTC Remote: 2025-10-20 11:57:43.742198679 +0000 UTC m=+21.842753529 (delta=96.225104ms)
	I1020 11:57:43.862080  143841 fix.go:200] guest clock delta is within tolerance: 96.225104ms
	I1020 11:57:43.862086  143841 start.go:83] releasing machines lock for "addons-323619", held for 21.843581261s
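The guest-clock check runs `date +%s.%N` in the VM and compares the result to the host time captured just before the SSH round trip; here the delta is 96.225104ms. A sketch of that comparison using the two timestamps from the log; the one-second tolerance is an assumption, since the actual threshold is not printed:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, from the log above.
        guestOut := "1760961463.838423783"
        secStr, nsecStr, _ := strings.Cut(guestOut, ".")
        sec, _ := strconv.ParseInt(secStr, 10, 64)
        nsec, _ := strconv.ParseInt(nsecStr, 10, 64)
        guest := time.Unix(sec, nsec)

        // Host-side timestamp captured just before the SSH round trip.
        host := time.Date(2025, 10, 20, 11, 57, 43, 742198679, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        // One-second tolerance is an assumption; the real threshold is not in the log.
        fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < time.Second)
    }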
	I1020 11:57:43.862110  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:43.862437  143841 main.go:141] libmachine: (addons-323619) Calling .GetIP
	I1020 11:57:43.865500  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.865908  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.865934  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.866102  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:43.866685  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:43.866925  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:57:43.867092  143841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 11:57:43.867115  143841 ssh_runner.go:195] Run: cat /version.json
	I1020 11:57:43.867135  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.867143  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:57:43.870198  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.870302  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.870647  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.870677  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.870701  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:43.870806  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:43.870857  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.871060  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.871123  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:57:43.871315  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:57:43.871318  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.871525  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:57:43.871592  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:57:43.871692  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:57:43.977438  143841 ssh_runner.go:195] Run: systemctl --version
	I1020 11:57:43.983633  143841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 11:57:44.139322  143841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 11:57:44.147138  143841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 11:57:44.147224  143841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 11:57:44.168331  143841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 11:57:44.168359  143841 start.go:495] detecting cgroup driver to use...
	I1020 11:57:44.168468  143841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 11:57:44.193077  143841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 11:57:44.211307  143841 docker.go:218] disabling cri-docker service (if available) ...
	I1020 11:57:44.211372  143841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 11:57:44.228465  143841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 11:57:44.244114  143841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 11:57:44.386725  143841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 11:57:44.588231  143841 docker.go:234] disabling docker service ...
	I1020 11:57:44.588315  143841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 11:57:44.604777  143841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 11:57:44.621211  143841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 11:57:44.772989  143841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 11:57:44.910666  143841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 11:57:44.926581  143841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 11:57:44.948092  143841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 11:57:44.948164  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:44.961068  143841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 11:57:44.961184  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:44.974377  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:44.986299  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:44.998383  143841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 11:57:45.010949  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:45.023076  143841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 11:57:45.042947  143841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
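The run of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, cgroup_manager is forced to cgroupfs, conmon_cgroup is reset to pod, and a default_sysctls block opens unprivileged low ports. A sketch of the same idempotent key rewrite done with Go's regexp instead of sed, over hypothetical minimal file contents:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey replaces an existing `key = ...` line, mirroring the
    // `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, key+` = "`+value+`"`)
    }

    func main() {
        // Hypothetical minimal 02-crio.conf contents.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"`
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Println(conf)
    }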
	I1020 11:57:45.058185  143841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 11:57:45.071080  143841 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 11:57:45.071143  143841 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1020 11:57:45.094048  143841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 11:57:45.106182  143841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:57:45.242872  143841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 11:57:45.346294  143841 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 11:57:45.346418  143841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 11:57:45.352394  143841 start.go:563] Will wait 60s for crictl version
	I1020 11:57:45.352496  143841 ssh_runner.go:195] Run: which crictl
	I1020 11:57:45.356846  143841 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1020 11:57:45.402740  143841 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1020 11:57:45.402883  143841 ssh_runner.go:195] Run: crio --version
	I1020 11:57:45.436182  143841 ssh_runner.go:195] Run: crio --version
	I1020 11:57:45.471417  143841 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1020 11:57:45.472619  143841 main.go:141] libmachine: (addons-323619) Calling .GetIP
	I1020 11:57:45.475737  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:45.476105  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:57:45.476128  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:57:45.476443  143841 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1020 11:57:45.480849  143841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
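The bash one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the gateway mapping, and copies the temp file back into place. The same filter-and-append expressed in Go over an in-memory copy (contents hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            // Drop any existing mapping, matching the `grep -v` in the log.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        keep = append(keep, "192.168.39.1\thost.minikube.internal")
        fmt.Println(strings.Join(keep, "\n"))
    }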
	I1020 11:57:45.495497  143841 kubeadm.go:883] updating cluster {Name:addons-323619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 11:57:45.495638  143841 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:57:45.495687  143841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:57:45.528441  143841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1020 11:57:45.528538  143841 ssh_runner.go:195] Run: which lz4
	I1020 11:57:45.532949  143841 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1020 11:57:45.537933  143841 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1020 11:57:45.537972  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1020 11:57:46.921310  143841 crio.go:462] duration metric: took 1.388404533s to copy over tarball
	I1020 11:57:46.921411  143841 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1020 11:57:48.451479  143841 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.530031179s)
	I1020 11:57:48.451516  143841 crio.go:469] duration metric: took 1.530167767s to extract the tarball
	I1020 11:57:48.451527  143841 ssh_runner.go:146] rm: /preloaded.tar.lz4
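The preload path works as follows: when `crictl images` shows the expected images missing, minikube scp's the ~409 MB lz4 preload tarball into the VM and unpacks it over /var, keeping security xattrs so file capabilities survive extraction. A sketch of the extraction step via os/exec, using the exact tar invocation from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same flags as the log: keep security xattrs, decompress with lz4,
        // extract under /var so images land in the container storage root.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }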
	I1020 11:57:48.492303  143841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 11:57:48.535242  143841 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 11:57:48.535268  143841 cache_images.go:85] Images are preloaded, skipping loading
	I1020 11:57:48.535277  143841 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.34.1 crio true true} ...
	I1020 11:57:48.535429  143841 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-323619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-323619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
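The kubelet drop-in above uses the standard systemd override idiom: an empty ExecStart= clears the ExecStart inherited from the base unit before the minikube-specific command line replaces it. A sketch of materializing such a drop-in from Go; the path matches the 10-kubeadm.conf scp'd later in the log, and the ExecStart line is abbreviated from the full one above:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        unit := `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --hostname-override=addons-323619 --node-ip=192.168.39.233

    [Install]
    `
        // Writing under /etc requires root; this is a sketch, not minikube's code.
        path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
        if err := os.WriteFile(path, []byte(unit), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }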
	I1020 11:57:48.535501  143841 ssh_runner.go:195] Run: crio config
	I1020 11:57:48.578875  143841 cni.go:84] Creating CNI manager for ""
	I1020 11:57:48.578901  143841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 11:57:48.578918  143841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 11:57:48.578940  143841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-323619 NodeName:addons-323619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 11:57:48.579090  143841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-323619"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.233"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
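The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place (see the scp and cp lines that follow). As a hedged sketch, a file like this can be exercised by hand without mutating the node, using the binary path shown in this log:

    # Sketch only: render the full init flow for the config above without
    # touching the host; paths are taken from the surrounding log lines.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run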
	I1020 11:57:48.579157  143841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 11:57:48.590531  143841 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 11:57:48.590593  143841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 11:57:48.601284  143841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1020 11:57:48.619934  143841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 11:57:48.638366  143841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1020 11:57:48.657127  143841 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I1020 11:57:48.661232  143841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
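The one-liner above is an idempotent hosts-file update: drop any stale line for the name, append the current mapping, and replace /etc/hosts with a single cp so readers never see a half-written file. A minimal standalone sketch of the same pattern, with the name and IP taken from this log:

    NAME="control-plane.minikube.internal"   # hostname pinned by minikube
    IP="192.168.39.233"                      # current node IP
    # Filter out any old entry for $NAME, append the fresh one, then swap
    # the file in atomically via cp.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$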
	I1020 11:57:48.675011  143841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:57:48.812732  143841 ssh_runner.go:195] Run: sudo systemctl start kubelet
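Once the kubelet unit is started, its local health endpoint (the same http://127.0.0.1:10248/healthz that kubeadm polls in the init output further down) can be probed directly. A small hedged check, assuming curl is present in the guest:

    # Poll the kubelet healthz endpoint until it answers, or give up after 30s.
    for i in $(seq 1 30); do
      curl -sf http://127.0.0.1:10248/healthz && break
      sleep 1
    done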
	I1020 11:57:48.831168  143841 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619 for IP: 192.168.39.233
	I1020 11:57:48.831188  143841 certs.go:195] generating shared ca certs ...
	I1020 11:57:48.831204  143841 certs.go:227] acquiring lock for ca certs: {Name:mk4d0d22cc1ac40184675be8ad2f5fa8f1c0ffc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:48.831341  143841 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key
	I1020 11:57:49.360714  143841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt ...
	I1020 11:57:49.360744  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt: {Name:mkaa5d33c38e914d0315bb92a60d4f24c480ec3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.360927  143841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key ...
	I1020 11:57:49.360939  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key: {Name:mkaa008ccbb4c7eedf712e5e10c43c0149d4180d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.361013  143841 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key
	I1020 11:57:49.693368  143841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt ...
	I1020 11:57:49.693404  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt: {Name:mk3d88ee260dbf7b5fd122523c072df5d76ad4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.693576  143841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key ...
	I1020 11:57:49.693589  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key: {Name:mkf9539c5ea0086c642b8794980ba3b9b226d435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.693660  143841 certs.go:257] generating profile certs ...
	I1020 11:57:49.693722  143841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.key
	I1020 11:57:49.693747  143841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt with IP's: []
	I1020 11:57:49.906748  143841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt ...
	I1020 11:57:49.906778  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: {Name:mk31e9e8ba0568a2ff8bd9afd490e2aa12ba3a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.906954  143841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.key ...
	I1020 11:57:49.906966  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.key: {Name:mk86f3ec178072581ee992444ce5571dad0ffb3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.907038  143841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key.d22ddc53
	I1020 11:57:49.907057  143841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt.d22ddc53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.233]
	I1020 11:57:49.998026  143841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt.d22ddc53 ...
	I1020 11:57:49.998054  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt.d22ddc53: {Name:mk45439ec6e02e01e4cefb211dd327a3fe908dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.998813  143841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key.d22ddc53 ...
	I1020 11:57:49.998831  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key.d22ddc53: {Name:mk785d10f66b35d8a3ecc98e6515bfd27676de81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:49.998913  143841 certs.go:382] copying /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt.d22ddc53 -> /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt
	I1020 11:57:49.999012  143841 certs.go:386] copying /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key.d22ddc53 -> /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key
	I1020 11:57:49.999070  143841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.key
	I1020 11:57:49.999091  143841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.crt with IP's: []
	I1020 11:57:50.344844  143841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.crt ...
	I1020 11:57:50.344873  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.crt: {Name:mkf5b3ceb518381fbb34675ac86ec81a603af13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:50.345608  143841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.key ...
	I1020 11:57:50.345628  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.key: {Name:mk3ad3704071c5bc6f38e34c3d1d24bc0ead379d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:57:50.345807  143841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 11:57:50.345844  143841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem (1082 bytes)
	I1020 11:57:50.345868  143841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem (1123 bytes)
	I1020 11:57:50.345889  143841 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem (1675 bytes)
	I1020 11:57:50.346606  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 11:57:50.376035  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1020 11:57:50.407079  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 11:57:50.440878  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 11:57:50.476609  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 11:57:50.504372  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 11:57:50.531296  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 11:57:50.557434  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 11:57:50.583316  143841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 11:57:50.609076  143841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 11:57:50.627166  143841 ssh_runner.go:195] Run: openssl version
	I1020 11:57:50.633074  143841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 11:57:50.645051  143841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:57:50.649764  143841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:57:50.649815  143841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 11:57:50.656688  143841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
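The b5213941.0 name is not arbitrary: OpenSSL resolves trust in /etc/ssl/certs by subject-hash symlinks, and the hash is exactly what the `openssl x509 -hash` run above printed. A short sketch tying the two steps together, with paths from this log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # Subject hash (here "b5213941"); OpenSSL looks up CAs as <hash>.0 links.
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"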
	I1020 11:57:50.668383  143841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 11:57:50.672792  143841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 11:57:50.672841  143841 kubeadm.go:400] StartCluster: {Name:addons-323619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:57:50.672921  143841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 11:57:50.672963  143841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 11:57:50.710035  143841 cri.go:89] found id: ""
	I1020 11:57:50.710120  143841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 11:57:50.721205  143841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 11:57:50.732160  143841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 11:57:50.742588  143841 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 11:57:50.742612  143841 kubeadm.go:157] found existing configuration files:
	
	I1020 11:57:50.742658  143841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 11:57:50.752176  143841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 11:57:50.752237  143841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 11:57:50.762657  143841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 11:57:50.772512  143841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 11:57:50.772558  143841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 11:57:50.782912  143841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 11:57:50.792521  143841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 11:57:50.792562  143841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 11:57:50.802809  143841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 11:57:50.812336  143841 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 11:57:50.812396  143841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 11:57:50.823852  143841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
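The long --ignore-preflight-errors list above mutes checks minikube has already satisfied or deliberately violates inside a small VM (for example Swap, NumCPU, and Mem). As a sketch, the preflight phase alone can be replayed against the same config to see what those checks would report:

    # Hedged sketch: re-run just the kubeadm preflight checks for the
    # config file referenced in the log above.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml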
	I1020 11:57:50.960932  143841 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 11:58:01.394568  143841 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 11:58:01.394669  143841 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 11:58:01.394774  143841 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 11:58:01.394924  143841 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 11:58:01.395068  143841 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 11:58:01.395192  143841 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 11:58:01.397829  143841 out.go:252]   - Generating certificates and keys ...
	I1020 11:58:01.397938  143841 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 11:58:01.398051  143841 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 11:58:01.398171  143841 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 11:58:01.398248  143841 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 11:58:01.398331  143841 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 11:58:01.398432  143841 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 11:58:01.398514  143841 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 11:58:01.398653  143841 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-323619 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	I1020 11:58:01.398747  143841 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 11:58:01.398919  143841 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-323619 localhost] and IPs [192.168.39.233 127.0.0.1 ::1]
	I1020 11:58:01.399031  143841 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 11:58:01.399125  143841 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 11:58:01.399213  143841 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 11:58:01.399298  143841 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 11:58:01.399372  143841 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 11:58:01.399474  143841 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 11:58:01.399538  143841 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 11:58:01.399591  143841 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 11:58:01.399639  143841 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 11:58:01.399718  143841 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 11:58:01.399803  143841 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 11:58:01.400914  143841 out.go:252]   - Booting up control plane ...
	I1020 11:58:01.401040  143841 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 11:58:01.401151  143841 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 11:58:01.401211  143841 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 11:58:01.401303  143841 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 11:58:01.401393  143841 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 11:58:01.401522  143841 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 11:58:01.401614  143841 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 11:58:01.401664  143841 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 11:58:01.401770  143841 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 11:58:01.401858  143841 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 11:58:01.401905  143841 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.341676ms
	I1020 11:58:01.401977  143841 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 11:58:01.402046  143841 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.233:8443/livez
	I1020 11:58:01.402132  143841 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 11:58:01.402202  143841 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 11:58:01.402271  143841 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.940747122s
	I1020 11:58:01.402328  143841 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.516440549s
	I1020 11:58:01.402390  143841 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501728414s
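The three control-plane-check probes above hit fixed local endpoints, so the same checks can be repeated by hand after init. A hedged sketch using the ports shown in the log (-k because the serving certificates are cluster-internal):

    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -sk https://192.168.39.233:8443/livez  # kube-apiserver (node IP from the log)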
	I1020 11:58:01.402505  143841 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 11:58:01.402627  143841 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 11:58:01.402687  143841 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 11:58:01.402839  143841 kubeadm.go:318] [mark-control-plane] Marking the node addons-323619 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 11:58:01.402887  143841 kubeadm.go:318] [bootstrap-token] Using token: h150zb.rdn9qsyp86vnllq9
	I1020 11:58:01.404658  143841 out.go:252]   - Configuring RBAC rules ...
	I1020 11:58:01.404801  143841 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 11:58:01.404917  143841 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 11:58:01.405101  143841 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 11:58:01.405290  143841 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 11:58:01.405463  143841 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 11:58:01.405544  143841 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 11:58:01.405648  143841 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 11:58:01.405692  143841 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 11:58:01.405738  143841 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 11:58:01.405744  143841 kubeadm.go:318] 
	I1020 11:58:01.405798  143841 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 11:58:01.405804  143841 kubeadm.go:318] 
	I1020 11:58:01.405867  143841 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 11:58:01.405872  143841 kubeadm.go:318] 
	I1020 11:58:01.405893  143841 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 11:58:01.405946  143841 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 11:58:01.405991  143841 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 11:58:01.405997  143841 kubeadm.go:318] 
	I1020 11:58:01.406041  143841 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 11:58:01.406046  143841 kubeadm.go:318] 
	I1020 11:58:01.406085  143841 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 11:58:01.406090  143841 kubeadm.go:318] 
	I1020 11:58:01.406138  143841 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 11:58:01.406204  143841 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 11:58:01.406273  143841 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 11:58:01.406283  143841 kubeadm.go:318] 
	I1020 11:58:01.406357  143841 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 11:58:01.406440  143841 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 11:58:01.406448  143841 kubeadm.go:318] 
	I1020 11:58:01.406527  143841 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h150zb.rdn9qsyp86vnllq9 \
	I1020 11:58:01.406622  143841 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:583a374fe669662bbcc38d1f0a7b704825d21b01d9c48ce23df3aa73e645937e \
	I1020 11:58:01.406645  143841 kubeadm.go:318] 	--control-plane 
	I1020 11:58:01.406654  143841 kubeadm.go:318] 
	I1020 11:58:01.406731  143841 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 11:58:01.406737  143841 kubeadm.go:318] 
	I1020 11:58:01.406817  143841 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h150zb.rdn9qsyp86vnllq9 \
	I1020 11:58:01.406930  143841 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:583a374fe669662bbcc38d1f0a7b704825d21b01d9c48ce23df3aa73e645937e 
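The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, so a join command can be validated out of band by recomputing it on the control plane. The standard derivation, assuming the CA at the certificatesDir shown in the kubeadm config earlier:

    # Recompute the discovery hash; the output should match the
    # sha256:583a37... value in the join commands above.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'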
	I1020 11:58:01.406954  143841 cni.go:84] Creating CNI manager for ""
	I1020 11:58:01.406966  143841 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 11:58:01.408281  143841 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1020 11:58:01.409346  143841 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1020 11:58:01.421939  143841 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
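The 496-byte conflist pushed above is minikube's bridge CNI configuration. Its exact contents are not in this log; as an illustration only (not the file minikube ships), a minimal bridge conflist for the 10.244.0.0/16 pod subnet configured earlier would look roughly like this:

    # Illustrative bridge CNI config; every field is an assumption except
    # the subnet, which matches podSubnet in the kubeadm config above.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF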
	I1020 11:58:01.443681  143841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 11:58:01.443790  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:01.443789  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-323619 minikube.k8s.io/updated_at=2025_10_20T11_58_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=addons-323619 minikube.k8s.io/primary=true
	I1020 11:58:01.489466  143841 ops.go:34] apiserver oom_adj: -16
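The clusterrolebinding and node labels applied two steps above can be verified with the same kubectl binary and kubeconfig the log uses. A quick hedged check:

    K=/var/lib/minikube/binaries/v1.34.1/kubectl
    sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide
    sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get node addons-323619 --show-labels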
	I1020 11:58:01.597128  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:02.098102  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:02.597197  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:03.097518  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:03.597585  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:04.097370  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:04.597497  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:05.097542  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:05.598125  143841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 11:58:05.678829  143841 kubeadm.go:1113] duration metric: took 4.235124516s to wait for elevateKubeSystemPrivileges
	I1020 11:58:05.678897  143841 kubeadm.go:402] duration metric: took 15.006059721s to StartCluster
	I1020 11:58:05.678925  143841 settings.go:142] acquiring lock: {Name:mka845ade6dad629b08aff076fd014e4b2afad9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:58:05.679070  143841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 11:58:05.679528  143841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/kubeconfig: {Name:mkf6907ead759546580f2340b9e9b6432a1cd822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 11:58:05.680357  143841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 11:58:05.680432  143841 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 11:58:05.680530  143841 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
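The toEnable map above is the resolved addon set for this profile; outside the test harness the same toggles are exposed through the minikube CLI. For example, against this profile and the binary used throughout this report:

    # Inspect addon states for the profile, then flip one on.
    out/minikube-linux-amd64 -p addons-323619 addons list
    out/minikube-linux-amd64 -p addons-323619 addons enable ingress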
	I1020 11:58:05.680678  143841 config.go:182] Loaded profile config "addons-323619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:05.680697  143841 addons.go:69] Setting metrics-server=true in profile "addons-323619"
	I1020 11:58:05.680697  143841 addons.go:69] Setting gcp-auth=true in profile "addons-323619"
	I1020 11:58:05.680719  143841 addons.go:238] Setting addon metrics-server=true in "addons-323619"
	I1020 11:58:05.680728  143841 addons.go:69] Setting ingress=true in profile "addons-323619"
	I1020 11:58:05.680738  143841 addons.go:238] Setting addon ingress=true in "addons-323619"
	I1020 11:58:05.680683  143841 addons.go:69] Setting yakd=true in profile "addons-323619"
	I1020 11:58:05.680757  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680751  143841 addons.go:69] Setting default-storageclass=true in profile "addons-323619"
	I1020 11:58:05.680771  143841 addons.go:238] Setting addon yakd=true in "addons-323619"
	I1020 11:58:05.680781  143841 addons.go:69] Setting ingress-dns=true in profile "addons-323619"
	I1020 11:58:05.680779  143841 addons.go:69] Setting storage-provisioner=true in profile "addons-323619"
	I1020 11:58:05.680801  143841 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-323619"
	I1020 11:58:05.680803  143841 addons.go:238] Setting addon ingress-dns=true in "addons-323619"
	I1020 11:58:05.680806  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680814  143841 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-323619"
	I1020 11:58:05.680822  143841 addons.go:69] Setting registry=true in profile "addons-323619"
	I1020 11:58:05.680820  143841 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-323619"
	I1020 11:58:05.680827  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680832  143841 addons.go:238] Setting addon registry=true in "addons-323619"
	I1020 11:58:05.680851  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680857  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680870  143841 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-323619"
	I1020 11:58:05.680898  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680774  143841 addons.go:69] Setting cloud-spanner=true in profile "addons-323619"
	I1020 11:58:05.680943  143841 addons.go:238] Setting addon cloud-spanner=true in "addons-323619"
	I1020 11:58:05.680959  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.681018  143841 addons.go:69] Setting registry-creds=true in profile "addons-323619"
	I1020 11:58:05.681033  143841 addons.go:238] Setting addon registry-creds=true in "addons-323619"
	I1020 11:58:05.681057  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680772  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.680813  143841 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-323619"
	I1020 11:58:05.681377  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681393  143841 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-323619"
	I1020 11:58:05.681419  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681419  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681428  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.681433  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.681439  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681446  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681455  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.681467  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.681469  143841 addons.go:69] Setting volumesnapshots=true in profile "addons-323619"
	I1020 11:58:05.681474  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.681481  143841 addons.go:238] Setting addon volumesnapshots=true in "addons-323619"
	I1020 11:58:05.681502  143841 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-323619"
	I1020 11:58:05.681512  143841 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-323619"
	I1020 11:58:05.681523  143841 addons.go:69] Setting volcano=true in profile "addons-323619"
	I1020 11:58:05.681530  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.680802  143841 addons.go:238] Setting addon storage-provisioner=true in "addons-323619"
	I1020 11:58:05.681535  143841 addons.go:238] Setting addon volcano=true in "addons-323619"
	I1020 11:58:05.681534  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.681553  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.680721  143841 mustload.go:65] Loading cluster: addons-323619
	I1020 11:58:05.680799  143841 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-323619"
	I1020 11:58:05.680684  143841 addons.go:69] Setting inspektor-gadget=true in profile "addons-323619"
	I1020 11:58:05.681631  143841 addons.go:238] Setting addon inspektor-gadget=true in "addons-323619"
	I1020 11:58:05.681760  143841 config.go:182] Loaded profile config "addons-323619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 11:58:05.681421  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.681847  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.682026  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.682084  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682113  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.682195  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682227  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.682315  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.682446  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682479  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.682524  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.682745  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682812  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.682834  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682928  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.682943  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.683031  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.683071  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.683102  143841 out.go:179] * Verifying Kubernetes components...
	I1020 11:58:05.682907  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.682897  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.683313  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.683346  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.684825  143841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 11:58:05.697696  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.697763  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.697696  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.697976  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.701494  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I1020 11:58:05.708589  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I1020 11:58:05.710538  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.710704  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.711286  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.711306  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.711778  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.712010  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.714525  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I1020 11:58:05.714964  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.715551  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.715572  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.716021  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.716704  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.716796  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.717133  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.717607  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.717647  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.718740  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I1020 11:58:05.719915  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.719935  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.722475  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.722681  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1020 11:58:05.723207  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.723447  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.723815  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.723822  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I1020 11:58:05.724106  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I1020 11:58:05.724358  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.724380  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.724867  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.724909  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.724985  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42911
	I1020 11:58:05.725493  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.725512  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.726022  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.726220  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.726233  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.726669  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.726683  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.726754  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.727368  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.727514  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.727460  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.727851  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.728352  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.728387  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.728924  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.729693  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.729724  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.731982  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.732052  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.732693  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.733385  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.733428  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.736514  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1020 11:58:05.736727  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.737431  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.737471  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.738117  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.738727  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.738746  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.739151  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.739206  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I1020 11:58:05.739782  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.739826  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.741841  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1020 11:58:05.742437  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.743029  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.743045  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.743477  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.745740  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I1020 11:58:05.746435  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.746676  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.746993  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.747212  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.747338  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.747295  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I1020 11:58:05.747981  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.748178  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.749139  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.749170  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I1020 11:58:05.749979  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.750018  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.750356  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.750618  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.750728  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I1020 11:58:05.751210  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.751748  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.751770  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.751966  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I1020 11:58:05.752166  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.752328  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.753058  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.753077  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.753149  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.753503  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.753808  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.753870  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.754103  143841 addons.go:238] Setting addon default-storageclass=true in "addons-323619"
	I1020 11:58:05.754164  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.754252  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.754268  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.754553  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.754601  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.754828  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.755499  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1020 11:58:05.756118  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.756667  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.756691  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.757112  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.760354  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I1020 11:58:05.763705  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I1020 11:58:05.764350  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.764975  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.765023  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.765229  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.765247  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.765587  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.765631  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.765916  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.766335  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.766822  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.766836  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.766861  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.766950  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.767894  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.768009  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.770637  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.770679  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.770951  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1020 11:58:05.771065  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I1020 11:58:05.771248  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.771274  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.773682  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.773785  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.773850  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I1020 11:58:05.774002  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.774095  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.774099  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I1020 11:58:05.774156  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.774532  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I1020 11:58:05.774764  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.774777  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.775229  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.775497  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.776105  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.776862  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.776877  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.776908  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.776930  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.777323  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.777370  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.777582  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.778242  143841 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-323619"
	I1020 11:58:05.778284  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:05.778311  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.778730  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.778763  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.778769  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.779252  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.779653  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.780009  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.780023  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.780450  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.781088  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.781189  143841 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1020 11:58:05.781548  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.782389  143841 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 11:58:05.782419  143841 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1020 11:58:05.782440  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.781993  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.782856  143841 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1020 11:58:05.783186  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.783218  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.783670  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.784042  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1020 11:58:05.784343  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.784346  143841 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1020 11:58:05.784688  143841 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1020 11:58:05.784707  143841 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1020 11:58:05.784727  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.784991  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.785798  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.785817  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.786003  143841 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1020 11:58:05.786017  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
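[Note] Transfers logged as "scp memory --> <path> (N bytes)" write manifest content held in memory (typically rendered from templates embedded in the minikube binary) to the node, whereas entries with a source path such as "scp yakd/yakd-ns.yaml --> ..." name the embedded asset file directly; in both cases the byte count is the size of the manifest written under /etc/kubernetes/addons/.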
	I1020 11:58:05.786035  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.786388  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.787080  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.789752  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1020 11:58:05.790736  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.791555  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.791609  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.792039  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.792112  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.792175  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.792893  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.793942  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.794282  143841 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1020 11:58:05.795445  143841 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1020 11:58:05.795654  143841 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:58:05.795670  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1020 11:58:05.795691  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.795783  143841 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 11:58:05.796469  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.796793  143841 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:58:05.796806  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1020 11:58:05.796824  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.796975  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.797662  143841 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:58:05.797722  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 11:58:05.797746  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.798693  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.798862  143841 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1020 11:58:05.799988  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.800011  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.800029  143841 out.go:179]   - Using image docker.io/registry:3.0.0
	I1020 11:58:05.801106  143841 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1020 11:58:05.801549  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1020 11:58:05.801849  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.802114  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.802326  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.802506  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.802683  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.804656  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.804757  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.805554  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.805753  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.805997  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I1020 11:58:05.806235  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I1020 11:58:05.806551  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.806758  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.807265  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.807456  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.807592  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.807640  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1020 11:58:05.807916  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.807945  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.808367  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.808640  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I1020 11:58:05.808887  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.808898  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.808913  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.808917  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.809360  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.809395  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.809422  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.809939  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.810011  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.810072  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.810081  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.810273  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.810673  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.810845  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.810892  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.811097  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.811169  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.811185  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.811838  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.811953  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.812008  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.812201  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.812227  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.812353  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.812376  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.812441  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.812578  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.812678  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.812716  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.812853  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.813066  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.813085  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.813149  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.813570  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.813654  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.813676  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.813732  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.813803  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.813897  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.813916  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.814059  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.814248  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.814323  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.814721  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.814721  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.814923  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.814981  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.815142  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.815189  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.815258  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.815528  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.815849  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.817647  143841 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1020 11:58:05.817646  143841 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1020 11:58:05.817788  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.818916  143841 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:58:05.818934  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1020 11:58:05.818953  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.819040  143841 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:58:05.819053  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1020 11:58:05.819071  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.819138  143841 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1020 11:58:05.819444  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1020 11:58:05.819595  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I1020 11:58:05.819888  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.820463  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.820608  143841 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1020 11:58:05.820624  143841 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1020 11:58:05.820653  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.820697  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.820729  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.820733  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.820888  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.820899  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.821376  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.821745  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.822090  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.822317  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1020 11:58:05.822426  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.823468  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I1020 11:58:05.823660  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1020 11:58:05.823684  143841 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1020 11:58:05.823704  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.824107  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.824933  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.825021  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.825735  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.826962  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:05.827110  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:05.827814  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.827857  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I1020 11:58:05.828086  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.828125  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:05.828144  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:05.828437  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:05.828680  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.828732  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:05.828750  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:05.828773  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:05.828781  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:05.829029  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:05.829039  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	W1020 11:58:05.829250  143841 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
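[Note] The 'volcano' warning above is an expected skip rather than part of this test's failure: the addon's enable callback rejects cri-o as the container runtime ("volcano addon does not support crio"), so minikube emits the warning and continues enabling the remaining addons.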
	I1020 11:58:05.829763  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1020 11:58:05.830030  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.830118  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.830143  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.830299  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.830440  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.830807  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.830828  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.830954  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.830974  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.831336  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.831553  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.831617  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.831660  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.831785  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.831993  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.832119  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.832228  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.832231  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.832435  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1020 11:58:05.832563  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.832567  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.832644  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.832690  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.832885  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.832956  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.833115  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.833501  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.833768  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.833905  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.834138  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.834361  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.834550  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.834676  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1020 11:58:05.834711  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.834751  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.836447  143841 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1020 11:58:05.836526  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1020 11:58:05.837761  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1020 11:58:05.837794  143841 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:58:05.837882  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I1020 11:58:05.838310  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.838747  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.838766  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.839218  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.839460  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.840086  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1020 11:58:05.840126  143841 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:58:05.841472  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.841676  143841 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 11:58:05.841692  143841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 11:58:05.841710  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.842239  143841 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:58:05.842263  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1020 11:58:05.842289  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.842763  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1020 11:58:05.843914  143841 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1020 11:58:05.844961  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1020 11:58:05.844981  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1020 11:58:05.845002  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.848800  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.849309  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.849348  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.849364  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.849676  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.849678  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.849905  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.849917  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1020 11:58:05.849972  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.849986  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.850159  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.850177  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.850201  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.850223  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.850367  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.850414  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.850503  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.850551  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:05.850630  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.850733  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.850891  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.850899  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.851026  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:05.851021  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:05.851041  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:05.851506  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:05.851765  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:05.853601  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:05.855361  143841 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1020 11:58:05.856498  143841 out.go:179]   - Using image docker.io/busybox:stable
	I1020 11:58:05.857588  143841 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:58:05.857603  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1020 11:58:05.857617  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:05.860981  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.861461  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:05.861491  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:05.861712  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:05.861904  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:05.862044  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:05.862206  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:06.350814  143841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 11:58:06.350848  143841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
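[Note] The pipeline above rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts block directly above the existing "forward . /etc/resolv.conf" line and a "log" directive above "errors", then replaces the ConfigMap. Reconstructed from the sed expressions (a sketch of the edited fragment, not captured output), the relevant part of the Corefile afterwards reads:

	log
	errors
	...
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

This is what makes host.minikube.internal resolve to the host-side gateway (192.168.39.1) from inside the cluster, as confirmed by the "host record injected into CoreDNS's ConfigMap" line at 11:58:09.330684 below.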
	I1020 11:58:06.440795  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1020 11:58:06.460361  143841 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:06.460383  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1020 11:58:06.468928  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 11:58:06.538547  143841 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1020 11:58:06.538581  143841 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1020 11:58:06.608269  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 11:58:06.651053  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 11:58:06.702928  143841 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 11:58:06.702952  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1020 11:58:06.705197  143841 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1020 11:58:06.705215  143841 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1020 11:58:06.740354  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 11:58:06.747505  143841 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1020 11:58:06.747527  143841 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1020 11:58:06.752691  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 11:58:06.825127  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 11:58:06.835175  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1020 11:58:06.835200  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1020 11:58:06.921646  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 11:58:06.957120  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:07.002029  143841 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:58:07.002070  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1020 11:58:07.082153  143841 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1020 11:58:07.082184  143841 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1020 11:58:07.104864  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 11:58:07.136916  143841 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1020 11:58:07.136945  143841 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1020 11:58:07.213366  143841 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 11:58:07.213410  143841 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1020 11:58:07.455435  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1020 11:58:07.455460  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1020 11:58:07.605698  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1020 11:58:07.656129  143841 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1020 11:58:07.656157  143841 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1020 11:58:07.681639  143841 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1020 11:58:07.681666  143841 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1020 11:58:07.742630  143841 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:58:07.742660  143841 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1020 11:58:07.749548  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1020 11:58:07.749583  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1020 11:58:08.092883  143841 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:58:08.092913  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1020 11:58:08.093358  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1020 11:58:08.093386  143841 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1020 11:58:08.202126  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1020 11:58:08.202164  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1020 11:58:08.202246  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 11:58:08.373094  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1020 11:58:08.499003  143841 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:58:08.499037  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1020 11:58:08.589758  143841 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1020 11:58:08.589797  143841 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1020 11:58:08.791865  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:58:08.850897  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1020 11:58:08.850919  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1020 11:58:09.268360  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1020 11:58:09.268418  143841 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1020 11:58:09.330637  143841 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.979745446s)
	I1020 11:58:09.330684  143841 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1020 11:58:09.330714  143841 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.979859678s)
	I1020 11:58:09.331725  143841 node_ready.go:35] waiting up to 6m0s for node "addons-323619" to be "Ready" ...
	I1020 11:58:09.336253  143841 node_ready.go:49] node "addons-323619" is "Ready"
	I1020 11:58:09.336287  143841 node_ready.go:38] duration metric: took 4.518623ms for node "addons-323619" to be "Ready" ...
	I1020 11:58:09.336315  143841 api_server.go:52] waiting for apiserver process to appear ...
	I1020 11:58:09.336371  143841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 11:58:09.621371  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1020 11:58:09.621422  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1020 11:58:09.856063  143841 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-323619" context rescaled to 1 replicas
	I1020 11:58:09.897897  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.457060156s)
	I1020 11:58:09.897970  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:09.897986  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:09.898368  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:09.898389  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:09.898410  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:09.898419  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:09.898829  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:09.898851  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:09.898858  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:10.036383  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1020 11:58:10.036418  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1020 11:58:10.399661  143841 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 11:58:10.399683  143841 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1020 11:58:10.613309  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
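[Note] Each addon's manifests are batched into a single kubectl apply (here eleven csi-hostpath-driver files in one invocation), so the "Completed: ... kubectl apply ..." lines that follow report one duration per addon batch rather than per file.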
	I1020 11:58:11.425882  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.817572534s)
	I1020 11:58:11.425960  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.425976  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.425963  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.774870744s)
	I1020 11:58:11.425988  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.685606066s)
	I1020 11:58:11.426025  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426046  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426073  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.673346982s)
	I1020 11:58:11.426103  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426118  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426139  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.600967143s)
	I1020 11:58:11.426168  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426188  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426117  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426271  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426337  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.426345  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.426367  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426374  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426347  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.426415  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.426427  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.426428  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.426437  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426441  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.426440  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.426450  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426445  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.426790  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.426856  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.426898  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.426922  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.426949  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.426963  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.426924  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.427249  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.427314  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.427334  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.427353  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.427374  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.427730  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.957863579s)
	I1020 11:58:11.427769  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.427782  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.427855  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.427863  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.427871  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.427878  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.428539  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.428555  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.428563  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.428570  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.428621  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.428638  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.428644  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.428717  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.428730  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.428857  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.428869  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.429230  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.429248  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:11.532541  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:11.532565  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:11.532941  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:11.532994  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:11.533009  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:12.087459  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.16577197s)
	I1020 11:58:12.087509  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:12.087518  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:12.087800  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:12.087846  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:12.087858  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:12.087866  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:12.088092  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:12.088114  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:12.088143  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:12.216954  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:12.216973  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:12.217306  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:12.217328  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:12.309747  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.352584182s)
	W1020 11:58:12.309804  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
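Editor's note: the deprecation warning in the stderr above is unrelated to the failure, but worth flagging: since Kubernetes v1.30 the `container.apparmor.security.beta.kubernetes.io/<name>` annotation is superseded by the `appArmorProfile` security-context field. A hedged sketch of the replacement using the `k8s.io/api` types (field and constant names per the v1.30+ API; the gadget DaemonSet's actual profile value is an assumption here):

```go
// Hedged sketch: the securityContext replacement for the deprecated
// AppArmor annotation flagged in the warning above (k8s.io/api >= v0.30).
package gadget

import (
	corev1 "k8s.io/api/core/v1"
)

// GadgetSecurityContext returns the field-based equivalent of the
// container.apparmor.security.beta.kubernetes.io/gadget annotation.
// "unconfined" is assumed; map it to the matching AppArmorProfileType.
func GadgetSecurityContext() *corev1.SecurityContext {
	return &corev1.SecurityContext{
		AppArmorProfile: &corev1.AppArmorProfile{
			Type: corev1.AppArmorProfileTypeUnconfined,
		},
	}
}
```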
	I1020 11:58:12.309836  143841 retry.go:31] will retry after 218.57283ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
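Editor's note: the repeated `apiVersion not set, kind not set` failure means the first document in ig-crd.yaml lacks its type header, so kubectl's client-side validation rejects it before anything reaches the cluster — which is why every retry fails identically until the manifest itself is fixed. A minimal sketch of that pre-flight check (assuming gopkg.in/yaml.v3; kubectl's real validator is more involved):

```go
// Minimal sketch (not kubectl's code): walk each YAML document in the
// manifest and report any that is missing apiVersion or kind, which is
// exactly what the validation error above complains about.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var tm typeMeta
		err := dec.Decode(&tm)
		if errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion not set / kind not set\n", i)
		}
	}
}
```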
	I1020 11:58:12.529496  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:13.182450  143841 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1020 11:58:13.182495  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:13.186609  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:13.187057  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:13.187088  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:13.187293  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:13.187514  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:13.187664  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:13.187850  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
	I1020 11:58:13.470096  143841 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1020 11:58:13.534664  143841 addons.go:238] Setting addon gcp-auth=true in "addons-323619"
	I1020 11:58:13.534738  143841 host.go:66] Checking if "addons-323619" exists ...
	I1020 11:58:13.535056  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:13.535095  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:13.548749  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1020 11:58:13.549233  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:13.549680  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:13.549706  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:13.550025  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:13.550511  143841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 11:58:13.550540  143841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 11:58:13.564246  143841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I1020 11:58:13.564850  143841 main.go:141] libmachine: () Calling .GetVersion
	I1020 11:58:13.565363  143841 main.go:141] libmachine: Using API Version  1
	I1020 11:58:13.565388  143841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 11:58:13.565946  143841 main.go:141] libmachine: () Calling .GetMachineName
	I1020 11:58:13.566190  143841 main.go:141] libmachine: (addons-323619) Calling .GetState
	I1020 11:58:13.568280  143841 main.go:141] libmachine: (addons-323619) Calling .DriverName
	I1020 11:58:13.568544  143841 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1020 11:58:13.568567  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHHostname
	I1020 11:58:13.571853  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:13.572376  143841 main.go:141] libmachine: (addons-323619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f8:e0", ip: ""} in network mk-addons-323619: {Iface:virbr1 ExpiryTime:2025-10-20 12:57:37 +0000 UTC Type:0 Mac:52:54:00:71:f8:e0 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:addons-323619 Clientid:01:52:54:00:71:f8:e0}
	I1020 11:58:13.572432  143841 main.go:141] libmachine: (addons-323619) DBG | domain addons-323619 has defined IP address 192.168.39.233 and MAC address 52:54:00:71:f8:e0 in network mk-addons-323619
	I1020 11:58:13.572564  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHPort
	I1020 11:58:13.572744  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHKeyPath
	I1020 11:58:13.572916  143841 main.go:141] libmachine: (addons-323619) Calling .GetSSHUsername
	I1020 11:58:13.573045  143841 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa Username:docker}
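Editor's note: sshutil.go builds its client from the machine's generated key pair (IP 192.168.39.233, port 22, user docker, key path as logged). A rough equivalent with golang.org/x/crypto/ssh, with host-key pinning skipped as a harness might do for a throwaway VM:

```go
// Hedged sketch of the "new ssh client" step above, using x/crypto/ssh.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21773-139101/.minikube/machines/addons-323619/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Freshly provisioned VM, so no known_hosts entry to pin against.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.233:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected as", cfg.User)
}
```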
	I1020 11:58:14.273042  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.168120815s)
	I1020 11:58:14.273116  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273130  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273063  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.667322627s)
	I1020 11:58:14.273150  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.070861335s)
	I1020 11:58:14.273192  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273244  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273263  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273259  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.90012928s)
	I1020 11:58:14.273310  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273321  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273334  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273354  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.481460113s)
	I1020 11:58:14.273418  143841 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.937016929s)
	W1020 11:58:14.273427  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 11:58:14.273438  143841 api_server.go:72] duration metric: took 8.592972191s to wait for apiserver process to appear ...
	I1020 11:58:14.273445  143841 api_server.go:88] waiting for apiserver healthz status ...
	I1020 11:58:14.273449  143841 retry.go:31] will retry after 163.514212ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
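Editor's note: unlike the ig-crd.yaml failure, `no matches for kind "VolumeSnapshotClass"` is an ordering race, not a bad manifest: the CRDs and the VolumeSnapshotClass land in one apply batch, and the class cannot be mapped until the apiserver has established the new types — which is why the forced re-apply at 11:58:14 below completes cleanly. A hedged sketch of the race-free sequence, driving kubectl from Go with the paths taken from the log:

```go
// Hedged sketch: install the CRD, block until it is Established, and only
// then apply the custom resource, avoiding the mapping error above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// 1. Install the CRD that defines VolumeSnapshotClass.
	run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// 2. Wait until the apiserver has established the new type.
	run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// 3. Now the class itself can be mapped and applied.
	run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}
```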
	I1020 11:58:14.273465  143841 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I1020 11:58:14.273570  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.273590  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.273590  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.273602  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273603  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.273610  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273614  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273624  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.273680  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.273699  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.273705  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.273718  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.273724  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.274037  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.274071  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.274091  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.274094  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.274098  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.274103  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.274106  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:14.274113  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:14.274114  143841 addons.go:479] Verifying addon metrics-server=true in "addons-323619"
	I1020 11:58:14.274220  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.274253  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.274262  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.274269  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.274278  143841 addons.go:479] Verifying addon ingress=true in "addons-323619"
	I1020 11:58:14.276502  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:14.276524  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.276536  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.276540  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:14.276549  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:14.276559  143841 addons.go:479] Verifying addon registry=true in "addons-323619"
	I1020 11:58:14.277703  143841 out.go:179] * Verifying ingress addon...
	I1020 11:58:14.278560  143841 out.go:179] * Verifying registry addon...
	I1020 11:58:14.278577  143841 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-323619 service yakd-dashboard -n yakd-dashboard
	
	I1020 11:58:14.280307  143841 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1020 11:58:14.281025  143841 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1020 11:58:14.318108  143841 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I1020 11:58:14.334025  143841 api_server.go:141] control plane version: v1.34.1
	I1020 11:58:14.334059  143841 api_server.go:131] duration metric: took 60.606053ms to wait for apiserver health ...
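Editor's note: the healthz wait above is a plain HTTPS probe — GET /healthz on the apiserver and accept a 200 whose body is `ok`, as the `returned 200: ok` lines show. A minimal sketch (certificate verification is skipped for brevity only; a real client would verify against the cluster CA):

```go
// Minimal sketch of the api_server.go:253 healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://192.168.39.233:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
```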
	I1020 11:58:14.334072  143841 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 11:58:14.390956  143841 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 11:58:14.390984  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:14.391011  143841 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 11:58:14.391029  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
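Editor's note: each `kapi.go:96] waiting for pod ... current state: Pending` line that follows is one tick of a poll loop — list the pods matching the label selector and report until all of them leave Pending. A sketch of that loop with client-go, assuming an already-configured clientset:

```go
// Hedged sketch of the label-selector wait driving the kapi.go lines.
package kapi

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabel polls until every pod matching selector in ns is Running.
func WaitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false // still Pending; keep polling
				}
			}
			if running {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}
```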
	I1020 11:58:14.426108  143841 system_pods.go:59] 17 kube-system pods found
	I1020 11:58:14.426158  143841 system_pods.go:61] "amd-gpu-device-plugin-6vxgv" [fc997290-0438-4790-bb90-fa014005eff8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:58:14.426171  143841 system_pods.go:61] "coredns-66bc5c9577-p84sc" [f8a58dc9-04b9-4a1c-9210-db7bc4d9d8b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:58:14.426184  143841 system_pods.go:61] "coredns-66bc5c9577-xxnb6" [387de95e-0fd5-462c-a8b8-ee5618f6d0bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:58:14.426193  143841 system_pods.go:61] "etcd-addons-323619" [c5acf381-856c-47f9-af72-af2ed78073ba] Running
	I1020 11:58:14.426200  143841 system_pods.go:61] "kube-apiserver-addons-323619" [6db284d6-7b41-477b-bc2b-8e52619d4a2f] Running
	I1020 11:58:14.426208  143841 system_pods.go:61] "kube-controller-manager-addons-323619" [76dcaa92-c8de-4ea6-a31f-a93bc3945ef1] Running
	I1020 11:58:14.426225  143841 system_pods.go:61] "kube-ingress-dns-minikube" [d9751b14-7770-4371-9c4d-4b6fd14d08e7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:58:14.426231  143841 system_pods.go:61] "kube-proxy-7p6h8" [f0c76506-3962-4ef0-b263-17a2c091b935] Running
	I1020 11:58:14.426237  143841 system_pods.go:61] "kube-scheduler-addons-323619" [7da28c89-a31a-406c-986e-c73691cbb667] Running
	I1020 11:58:14.426245  143841 system_pods.go:61] "metrics-server-85b7d694d7-p578g" [efbbc581-2f4c-4fce-bdc8-f1da295b4b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:58:14.426258  143841 system_pods.go:61] "nvidia-device-plugin-daemonset-8bl6k" [f8e8140e-4e5f-4f90-ab0a-d58e0f710081] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:58:14.426275  143841 system_pods.go:61] "registry-6b586f9694-ztdx9" [e0f3051e-f382-4b73-bc54-fc3e72c133dc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:58:14.426286  143841 system_pods.go:61] "registry-creds-764b6fb674-pd9br" [c002ab26-a4ae-4e5f-a30b-bccee32cf709] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:58:14.426298  143841 system_pods.go:61] "registry-proxy-d9xww" [7621b093-dc68-4763-8bf1-6acf5e291d3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:58:14.426304  143841 system_pods.go:61] "snapshot-controller-7d9fbc56b8-476qn" [9801dc41-a4a5-45c0-a0ee-ed70b9988451] Pending
	I1020 11:58:14.426314  143841 system_pods.go:61] "snapshot-controller-7d9fbc56b8-b9krc" [c8f2ea96-9673-410f-9822-2b7222a9c380] Pending
	I1020 11:58:14.426321  143841 system_pods.go:61] "storage-provisioner" [cbbf88ad-b99e-4137-96e3-7ea228aab0c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:58:14.426333  143841 system_pods.go:74] duration metric: took 92.252767ms to wait for pod list to return data ...
	I1020 11:58:14.426350  143841 default_sa.go:34] waiting for default service account to be created ...
	I1020 11:58:14.437554  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 11:58:14.469075  143841 default_sa.go:45] found service account: "default"
	I1020 11:58:14.469114  143841 default_sa.go:55] duration metric: took 42.752462ms for default service account to be created ...
	I1020 11:58:14.469129  143841 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 11:58:14.508035  143841 system_pods.go:86] 17 kube-system pods found
	I1020 11:58:14.508071  143841 system_pods.go:89] "amd-gpu-device-plugin-6vxgv" [fc997290-0438-4790-bb90-fa014005eff8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1020 11:58:14.508077  143841 system_pods.go:89] "coredns-66bc5c9577-p84sc" [f8a58dc9-04b9-4a1c-9210-db7bc4d9d8b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:58:14.508087  143841 system_pods.go:89] "coredns-66bc5c9577-xxnb6" [387de95e-0fd5-462c-a8b8-ee5618f6d0bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 11:58:14.508091  143841 system_pods.go:89] "etcd-addons-323619" [c5acf381-856c-47f9-af72-af2ed78073ba] Running
	I1020 11:58:14.508096  143841 system_pods.go:89] "kube-apiserver-addons-323619" [6db284d6-7b41-477b-bc2b-8e52619d4a2f] Running
	I1020 11:58:14.508099  143841 system_pods.go:89] "kube-controller-manager-addons-323619" [76dcaa92-c8de-4ea6-a31f-a93bc3945ef1] Running
	I1020 11:58:14.508104  143841 system_pods.go:89] "kube-ingress-dns-minikube" [d9751b14-7770-4371-9c4d-4b6fd14d08e7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 11:58:14.508108  143841 system_pods.go:89] "kube-proxy-7p6h8" [f0c76506-3962-4ef0-b263-17a2c091b935] Running
	I1020 11:58:14.508111  143841 system_pods.go:89] "kube-scheduler-addons-323619" [7da28c89-a31a-406c-986e-c73691cbb667] Running
	I1020 11:58:14.508116  143841 system_pods.go:89] "metrics-server-85b7d694d7-p578g" [efbbc581-2f4c-4fce-bdc8-f1da295b4b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 11:58:14.508127  143841 system_pods.go:89] "nvidia-device-plugin-daemonset-8bl6k" [f8e8140e-4e5f-4f90-ab0a-d58e0f710081] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 11:58:14.508132  143841 system_pods.go:89] "registry-6b586f9694-ztdx9" [e0f3051e-f382-4b73-bc54-fc3e72c133dc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 11:58:14.508140  143841 system_pods.go:89] "registry-creds-764b6fb674-pd9br" [c002ab26-a4ae-4e5f-a30b-bccee32cf709] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 11:58:14.508147  143841 system_pods.go:89] "registry-proxy-d9xww" [7621b093-dc68-4763-8bf1-6acf5e291d3d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 11:58:14.508157  143841 system_pods.go:89] "snapshot-controller-7d9fbc56b8-476qn" [9801dc41-a4a5-45c0-a0ee-ed70b9988451] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 11:58:14.508161  143841 system_pods.go:89] "snapshot-controller-7d9fbc56b8-b9krc" [c8f2ea96-9673-410f-9822-2b7222a9c380] Pending
	I1020 11:58:14.508167  143841 system_pods.go:89] "storage-provisioner" [cbbf88ad-b99e-4137-96e3-7ea228aab0c0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 11:58:14.508174  143841 system_pods.go:126] duration metric: took 39.039057ms to wait for k8s-apps to be running ...
	I1020 11:58:14.508184  143841 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 11:58:14.508230  143841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
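Editor's note: the kubelet check relies entirely on systemctl's exit status — `is-active --quiet` prints nothing and exits 0 only when the unit is active, so no output parsing is needed. Sketch:

```go
// Sketch of the kubelet liveness check above: the exit code is the answer.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil {
		fmt.Println("kubelet is active")
	} else {
		fmt.Println("kubelet is not active")
	}
}
```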
	I1020 11:58:14.793438  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:14.793471  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:15.305127  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:15.310120  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:15.479683  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.866314314s)
	I1020 11:58:15.479764  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:15.479787  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:15.480118  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:15.480147  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:15.480160  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:15.480175  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:15.480184  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:15.480447  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:15.480461  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:15.480470  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:15.480472  143841 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-323619"
	I1020 11:58:15.482077  143841 out.go:179] * Verifying csi-hostpath-driver addon...
	I1020 11:58:15.484054  143841 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1020 11:58:15.524874  143841 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 11:58:15.524902  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:15.789334  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:15.790708  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:15.850388  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.320854065s)
	W1020 11:58:15.850482  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:15.850503  143841 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.281932388s)
	I1020 11:58:15.850516  143841 retry.go:31] will retry after 207.550249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:58:15.852016  143841 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1020 11:58:15.853229  143841 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 11:58:15.854185  143841 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1020 11:58:15.854202  143841 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1020 11:58:15.916539  143841 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1020 11:58:15.916563  143841 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1020 11:58:15.970197  143841 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:58:15.970220  143841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
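Editor's note: `scp memory -->` means the gcp-auth manifests never touch the local disk — the bytes are streamed over the existing SSH session straight into the target path on the node. A hedged sketch with x/crypto/ssh, reusing a client like the one built earlier (`sudo tee` stands in for whatever remote write the runner actually performs):

```go
// Hedged sketch: stream an in-memory manifest to a node path over SSH,
// as the "scp memory --> /etc/kubernetes/addons/..." lines describe.
package sshutil

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

func ScpMemory(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// Pipe stdin into the destination; tee needs sudo for /etc/kubernetes.
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
```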
	I1020 11:58:15.991689  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.036276  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 11:58:16.058248  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:16.286479  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:16.286605  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:16.488998  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:16.786495  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:16.786623  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:16.894253  143841 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.385999455s)
	I1020 11:58:16.894284  143841 system_svc.go:56] duration metric: took 2.386095844s WaitForService to wait for kubelet
	I1020 11:58:16.894293  143841 kubeadm.go:586] duration metric: took 11.213828324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 11:58:16.894313  143841 node_conditions.go:102] verifying NodePressure condition ...
	I1020 11:58:16.894247  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.456635403s)
	I1020 11:58:16.894465  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:16.894488  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:16.894810  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:16.894828  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:16.894837  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:16.894835  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:16.894856  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:16.895123  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:16.895141  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:16.895124  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:16.898865  143841 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1020 11:58:16.898888  143841 node_conditions.go:123] node cpu capacity is 2
	I1020 11:58:16.898901  143841 node_conditions.go:105] duration metric: took 4.584718ms to run NodePressure ...
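Editor's note: the NodePressure step reads capacity straight off the node object (2 CPUs and ~17 GiB of ephemeral storage here) and confirms no pressure condition is set. A sketch with client-go, again assuming a configured clientset:

```go
// Hedged sketch of the node_conditions.go capacity/pressure check.
package nodes

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func CheckNodes(cs kubernetes.Interface) error {
	list, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range list.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		for _, c := range n.Status.Conditions {
			// Memory/Disk/PIDPressure must be False on a healthy node.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return fmt.Errorf("%s: condition %s is True", n.Name, c.Type)
			}
		}
	}
	return nil
}
```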
	I1020 11:58:16.898914  143841 start.go:241] waiting for startup goroutines ...
	I1020 11:58:16.989493  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.291979  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:17.292370  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:17.515570  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:17.573137  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.5368091s)
	I1020 11:58:17.573218  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:17.573234  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:17.573581  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:17.573602  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:17.573612  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:58:17.573620  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:58:17.573909  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:58:17.573933  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:58:17.573934  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:58:17.574818  143841 addons.go:479] Verifying addon gcp-auth=true in "addons-323619"
	I1020 11:58:17.576985  143841 out.go:179] * Verifying gcp-auth addon...
	I1020 11:58:17.579106  143841 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1020 11:58:17.602363  143841 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1020 11:58:17.602387  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:17.789186  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:17.789478  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:17.992423  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:18.085503  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:18.255157  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.196839999s)
	W1020 11:58:18.255219  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:18.255250  143841 retry.go:31] will retry after 706.875707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:58:18.290015  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:18.292010  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:18.493739  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:18.586131  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:18.788788  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:18.788930  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:18.963239  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:18.991008  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:19.086541  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:19.288221  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:19.292760  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:19.489053  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:19.586711  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:19.790311  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:19.791773  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:19.989962  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:20.084733  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:20.147502  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.184213227s)
	W1020 11:58:20.147546  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:20.147575  143841 retry.go:31] will retry after 937.828683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:58:20.292845  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:20.293509  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:20.490193  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:20.583178  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:20.784556  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:20.786511  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:20.988637  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:21.085789  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:21.085898  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:21.287992  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:21.289730  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:21.490470  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:21.583515  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:21.787665  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:21.787689  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:21.987718  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:22.083682  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:22.167439  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.08157411s)
	W1020 11:58:22.167483  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:22.167508  143841 retry.go:31] will retry after 915.736532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 11:58:22.285088  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:22.288195  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:22.487743  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:22.583786  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:22.786700  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:22.786847  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:22.991028  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:23.083383  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:23.085287  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:23.285903  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:23.287322  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:23.488290  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:23.584313  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:23.789036  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:23.789057  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:23.987681  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:24.083186  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:24.287791  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:24.288117  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:24.330218  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.246773369s)
	W1020 11:58:24.330257  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:24.330281  143841 retry.go:31] will retry after 1.895952789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
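
Note: every apply attempt above fails identically. The resources from ig-deployment.yaml apply cleanly (the gadget namespace, RBAC objects, and daemonset all report "unchanged"/"configured"), but kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level `apiVersion` and `kind` fields are apparently not set, which every Kubernetes manifest document requires. A minimal sketch of that check follows (not minikube's or kubectl's actual code; the file path and the gopkg.in/yaml.v3 dependency are assumptions for illustration):

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

func main() {
	// hypothetical local copy of the addon manifest rejected in the log
	data, err := os.ReadFile("ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	var doc struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		panic(err)
	}
	var missing []string
	if doc.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if doc.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		// mirrors the kubectl error text above: [apiVersion not set, kind not set]
		fmt.Printf("error validating data: %v\n", missing)
	}
}
```

Because the file content never changes between attempts, the retries below can only ever reproduce the same validation error.
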
	I1020 11:58:24.491179  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:24.601203  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:24.785901  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:24.785918  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:24.988697  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:25.097336  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:25.288567  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:25.288626  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:25.490914  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:25.584879  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:25.789564  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:25.791155  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:25.990472  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:26.084886  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:26.227188  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:26.285870  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:26.286035  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:26.491785  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:26.582601  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:26.784671  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:26.786659  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:26.988206  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1020 11:58:27.049267  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:27.049299  143841 retry.go:31] will retry after 3.909507755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:27.082153  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:27.285729  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:27.285913  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:27.489458  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:27.582888  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:27.784808  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:27.784875  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:28.197462  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:28.198693  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:28.284074  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:28.285106  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:28.489331  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:28.584207  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:28.876946  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:28.879101  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:28.993084  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:29.082423  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:29.283843  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:29.284855  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:29.488767  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:29.585243  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:29.786179  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:29.786562  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:29.988578  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:30.084110  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:30.454736  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:30.454781  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:30.620839  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:30.623200  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:30.785557  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:30.787071  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:30.959838  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:30.987996  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:31.082849  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:31.289907  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:31.290021  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:31.488020  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:31.583611  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:31.785302  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:31.785554  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:58:31.788946  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:31.788980  143841 retry.go:31] will retry after 6.02702458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:31.988748  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:32.083979  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:32.284702  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:32.284941  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:32.488769  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:32.582464  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:32.784058  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:32.785104  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:32.987777  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:33.083231  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:33.284915  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:33.284924  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:33.489526  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:33.582336  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:33.785571  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:33.797384  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:33.994353  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:34.084428  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:34.283682  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:34.285838  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:34.489152  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:34.582993  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:34.784126  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:34.785440  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:34.987548  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:35.082684  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:35.285376  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:35.285435  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:35.490121  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:35.583316  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:35.784310  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:35.785453  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:35.988235  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:36.081944  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:36.284159  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:36.285483  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:36.490124  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:36.583550  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:36.788815  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:36.789686  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:36.991604  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:37.083530  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:37.285118  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:37.288417  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:37.489911  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:37.583740  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:37.785380  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:37.785680  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:37.816771  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:37.990029  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:38.085611  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:38.285329  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:38.286672  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:38.488480  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:38.585134  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:58:38.782976  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:38.783018  143841 retry.go:31] will retry after 7.673572351s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:38.784688  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:38.788677  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:38.988146  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:39.083942  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:39.284763  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:39.285842  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:39.489782  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:39.599234  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:39.818229  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:39.818444  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:39.988206  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:40.082874  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:40.284641  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:40.285029  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:40.488279  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:40.583283  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:40.788206  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:40.788264  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:40.992229  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:41.082492  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:41.284602  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:41.284943  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:41.489008  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:41.583533  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:41.786793  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:41.790274  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:41.987888  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:42.091878  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:42.284890  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:42.285616  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:42.488912  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:42.585628  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:42.784491  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:42.784754  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:42.987969  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:43.082919  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:43.284365  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:43.284421  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:43.487686  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:43.583101  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:43.784216  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:43.784546  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:43.987819  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:44.083928  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:44.284672  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:44.284900  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:44.488327  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:44.583423  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:44.784186  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:44.784669  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:44.988073  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:45.084601  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:45.285854  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:45.289147  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:45.489799  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:45.583137  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:45.786690  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:45.787967  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:45.989508  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:46.085162  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:46.289737  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:46.289961  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:46.457205  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:46.490135  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:46.589553  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:46.784330  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:46.785783  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:46.987145  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:47.083841  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:58:47.136467  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:47.136495  143841 retry.go:31] will retry after 7.532864165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:47.283950  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:47.285372  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:47.488365  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:47.582611  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:47.783653  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:47.784887  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:47.988942  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:48.082779  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:48.285658  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:48.285722  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:48.496209  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:48.587951  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:48.787366  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:48.787703  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:48.989987  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:49.083377  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:49.286553  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:49.286871  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:49.493511  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:49.592929  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:49.786595  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:49.786632  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:49.990463  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:50.082088  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:50.286095  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:50.286794  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:50.488950  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:50.582854  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:50.787342  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:50.787814  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:50.987598  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:51.082475  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:51.283853  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:51.285850  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:51.490202  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:51.582959  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:51.785664  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:51.785874  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:51.987795  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:52.083418  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:52.286751  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:52.288295  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:52.487731  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:52.586990  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:52.786001  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:52.786276  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:52.992014  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:53.083742  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:53.286695  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:53.286886  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:53.488852  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:53.583160  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:53.789001  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:53.790121  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:53.988187  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:54.084115  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:54.290601  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:54.291254  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:54.490163  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:54.585147  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:54.670342  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:58:54.787943  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:54.788552  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:54.990738  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:55.177065  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:55.286366  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:55.286956  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:55.489441  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:55.583651  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 11:58:55.663428  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:58:55.663477  143841 retry.go:31] will retry after 19.44064776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
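
Note: the retry.go delays reported above (1.9s, 3.9s, 6.0s, 7.7s, 7.5s, 19.4s) grow but not strictly monotonically, consistent with an increasing backoff window with random jitter. A minimal sketch of that cadence, with constants that are assumptions rather than minikube's actual parameters:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	base := 2 * time.Second
	for attempt := 0; attempt < 6; attempt++ {
		// double the window each attempt, then land at a random point
		// in its upper half, so successive delays can overlap
		window := base * time.Duration(1<<attempt)
		delay := window/2 + time.Duration(rand.Int63n(int64(window/2)+1))
		fmt.Printf("will retry after %v\n", delay)
		// in the real loop, the kubectl apply is re-run after sleeping
		time.Sleep(delay)
	}
}
```
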
	I1020 11:58:55.785842  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:55.786396  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:55.989408  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:56.082349  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:56.285394  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:56.285872  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:56.488597  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:56.582485  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:56.784750  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:56.785095  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:56.987359  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:57.082869  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:57.285284  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:57.285785  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:57.487826  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:57.582725  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:57.783938  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:57.784018  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:57.988422  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:58.082697  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:58.284237  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:58.284459  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:58.488156  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:58.583241  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:58.785393  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:58.786959  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:58.988302  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:59.082741  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:59.285228  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:59.285460  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:59.487123  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:58:59.585391  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:58:59.785159  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:58:59.786070  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:58:59.988686  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:00.083463  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:00.285457  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:00.287052  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:59:00.489126  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:00.584573  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:00.785028  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:59:00.785262  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:00.989301  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:01.082068  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:01.285357  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:01.285420  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 11:59:01.490624  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:01.582930  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:01.788615  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:01.788710  143841 kapi.go:107] duration metric: took 47.507683081s to wait for kubernetes.io/minikube-addons=registry ...
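
Note: the registry wait has just completed after 47.5s, while the ingress-nginx, csi-hostpath-driver, and gcp-auth loops continue below. The kapi.go:96 lines throughout this log follow one pattern: list pods matching a label selector roughly every 500ms until one is Running, or give up at a deadline. A minimal sketch of that shape (the client-go calls, namespace, and timeout are assumptions for illustration, not minikube's exact code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence per selector above
	}
	return fmt.Errorf("timed out waiting for pod %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// selector taken from the log; namespace and timeout are assumed
	if err := waitForPod(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
```
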
	I1020 11:59:01.989430  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:02.083862  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:02.458096  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:02.491471  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:02.589423  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:02.784660  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:02.987848  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:03.083431  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:03.284832  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:03.488274  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:03.582479  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:03.785680  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:03.989389  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:04.083620  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:04.285847  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:04.490071  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:04.584332  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:04.792716  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:04.987550  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:05.083918  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:05.286415  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:05.488893  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:05.583628  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:05.785549  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:05.989301  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:06.082571  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:06.284140  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:06.487366  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:06.582737  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:06.785128  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:06.987965  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:07.082552  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:07.283717  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:07.488443  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:07.584280  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:07.788280  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:07.987087  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:08.082288  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:08.283724  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:08.488841  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:08.582599  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:08.788262  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:08.988079  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:09.083000  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:09.284648  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:09.488862  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:09.582628  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:09.785185  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:09.988509  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:10.082529  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:10.284386  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:10.489293  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:10.582064  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:10.784563  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:10.989343  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:11.083829  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:11.285248  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:11.488376  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:11.581962  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:11.785366  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:11.988254  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:12.084580  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:12.287173  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:12.487224  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:12.582092  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:12.887904  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:12.989155  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:13.088944  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:13.284302  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:13.487634  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:13.582820  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:13.784255  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:13.987639  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:14.082556  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:14.283713  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:14.487940  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:14.582575  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:14.783976  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:14.989265  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:15.082001  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:15.105181  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:59:15.285134  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:15.486981  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:15.582936  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:15.784883  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:15.989137  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:16.082261  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:16.133324  143841 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.028094548s)
	W1020 11:59:16.133386  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:59:16.133437  143841 retry.go:31] will retry after 28.193000636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
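The `will retry after 28.193000636s` line above comes from minikube's generic retry helper (retry.go:31), which reschedules the failed `kubectl apply` with a randomized backoff. A minimal sketch of that retry-with-backoff pattern, under the assumption that it looks roughly like the following (illustrative Go only, not minikube's actual retry.go; `retryWithBackoff` and its parameters are invented names):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to maxAttempts times, sleeping a randomized
// multiple of base between failures, mirroring the "will retry after 28.19s"
// behaviour the log reports from retry.go:31. Hypothetical helper, for
// illustration only.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt < maxAttempts {
			// Randomize the wait so parallel callers do not retry in lockstep.
			sleep := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, sleep, err)
			time.Sleep(sleep)
		}
	}
	return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(3, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // fails twice, succeeds on the third call
		}
		return nil
	})
	fmt.Println("result:", err)
}
```

Note that the retry here cannot succeed: the manifest itself is invalid, so every attempt hits the same validation error.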
	I1020 11:59:16.284894  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:16.489252  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:16.582129  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:16.787422  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:16.991189  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:17.084850  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:17.287175  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:17.491106  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:17.592781  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:17.788685  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:17.990499  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:18.083581  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:18.284578  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:18.488098  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:18.584955  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:18.788072  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:18.989241  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:19.084450  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:19.286881  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:19.488829  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:19.605362  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:19.786812  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:19.988755  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:20.083260  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:20.285066  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:20.487812  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:20.583009  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:20.786207  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:20.991777  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:21.084686  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:21.283699  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:21.489518  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:21.584081  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:21.786033  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:21.989531  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:22.082948  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:22.286123  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:22.488730  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:22.583663  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:22.785842  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:22.990592  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:23.088843  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:23.284829  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:23.488070  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:23.584081  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:23.786795  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:23.988275  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:24.083349  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:24.286464  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:24.489036  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:24.583538  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:24.786411  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:24.989826  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:25.084969  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:25.286512  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:25.489185  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:25.582932  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:25.788628  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:25.994571  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:26.093827  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:26.284561  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:26.489757  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:26.584003  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:26.788817  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:26.990328  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:27.090235  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:27.417112  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:27.495298  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:27.591587  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:27.787325  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:27.990079  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:28.085808  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:28.285300  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:28.489430  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:28.583320  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:28.788833  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:28.990930  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:29.086918  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:29.285038  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:29.489168  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:29.584777  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:29.783920  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:29.996158  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:30.094772  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:30.290042  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:30.488454  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:30.582160  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:30.786843  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:30.988285  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:31.084423  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:31.287283  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:31.488001  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:31.583071  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:31.783853  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:31.988743  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:32.085218  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:32.287303  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:32.488136  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:32.583221  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:32.787620  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:32.990960  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:33.086569  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:33.284602  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:33.488666  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:33.583587  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:33.790313  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:33.988233  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 11:59:34.083098  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:34.284388  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:34.488122  143841 kapi.go:107] duration metric: took 1m19.004067336s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
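The `duration metric: took 1m19.004067336s` line marks the end of the kapi poll for `kubernetes.io/minikube-addons=csi-hostpath-driver`; the two remaining selectors keep polling. A minimal client-go sketch of such a label-selector wait, assuming the roughly 200ms cadence visible in the timestamps (illustrative only, not minikube's actual kapi.go; `waitForPods` is an invented name):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the API server until every pod matching selector in ns
// reports phase Running, or the timeout expires.
func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(200*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		if len(pods.Items) == 0 {
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 5*time.Minute); err != nil {
		panic(err)
	}
}
```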
	I1020 11:59:34.583091  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:34.784497  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:35.082370  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:35.284480  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:35.582944  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:35.784829  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:36.083327  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:36.284898  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:36.582836  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:36.783587  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:37.115646  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:37.284387  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:37.582075  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:37.784563  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:38.082459  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:38.284027  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:38.582100  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:38.784326  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:39.082312  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:39.284064  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:39.582646  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:39.783836  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:40.083073  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:40.284818  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:40.583206  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:40.784125  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:41.082794  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:41.285323  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:41.582263  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:41.783833  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:42.082958  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:42.285012  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:42.582117  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:42.784382  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:43.083159  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:43.285523  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:43.582889  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:43.784294  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:44.082192  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:44.284613  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:44.327526  143841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 11:59:44.584339  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:44.785746  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 11:59:44.989271  143841 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 11:59:44.989396  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:59:44.989434  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:59:44.989735  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:59:44.989758  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 11:59:44.989769  143841 main.go:141] libmachine: Making call to close driver server
	I1020 11:59:44.989779  143841 main.go:141] libmachine: (addons-323619) Calling .Close
	I1020 11:59:44.989787  143841 main.go:141] libmachine: (addons-323619) DBG | Closing plugin on server side
	I1020 11:59:44.990024  143841 main.go:141] libmachine: Successfully made call to close driver server
	I1020 11:59:44.990037  143841 main.go:141] libmachine: Making call to close connection to plugin binary
	W1020 11:59:44.990154  143841 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
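The root cause reported by both failed applies is the same: `/etc/kubernetes/addons/ig-crd.yaml` is missing its top-level `apiVersion` and `kind` fields, which every Kubernetes manifest must set, so the inspektor-gadget addon is disabled after the retries are exhausted (kubectl's own suggestion, `--validate=false`, would only skip the check, not fix the manifest). A minimal sketch of that client-side check (illustrative Go under gopkg.in/yaml.v3; the manifest name below is hypothetical, and the real validation lives inside kubectl):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// badCRD mimics a manifest shaped like the ig-crd.yaml the log complains
// about: body fields are present but apiVersion and kind are missing.
// The metadata.name here is a hypothetical example.
const badCRD = `
metadata:
  name: traces.gadget.example.io
spec: {}
`

// typeMeta captures only the two fields kubectl's validator requires on
// every document.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func validate(doc []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: %v", missing)
	}
	return nil
}

func main() {
	// Prints: error validating data: [apiVersion not set kind not set]
	fmt.Println(validate([]byte(badCRD)))
}
```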
	I1020 11:59:45.082141  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:45.285105  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:45.582860  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:45.784682  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:46.083759  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:46.284218  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:46.582443  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:46.783542  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:47.082857  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:47.285126  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:47.582325  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:47.785290  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:48.082263  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:48.283871  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:48.582925  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:48.784516  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:49.082428  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:49.284576  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:49.583047  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:49.784315  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:50.082284  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:50.285927  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:50.583003  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:50.785383  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:51.083075  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:51.284343  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:51.582812  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:51.784520  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:52.082547  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:52.284249  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:52.582395  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:52.784311  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:53.082759  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:53.284354  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:53.583050  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:53.784412  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:54.083042  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:54.284394  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:54.582527  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:54.783991  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:55.083287  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:55.284026  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:55.582948  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:55.784575  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:56.082800  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:56.283898  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:56.582954  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:56.784782  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:57.082666  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:57.284589  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:57.582849  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:57.784911  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:58.083148  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:58.284493  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:58.582368  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:58.783640  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:59.082883  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:59.285848  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 11:59:59.583554  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 11:59:59.784026  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:00.082878  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:00.285244  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:00.582800  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:00.784453  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:01.082569  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:01.284567  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:01.583045  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:01.784670  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:02.083325  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:02.283985  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:02.582728  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:02.784260  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:03.082770  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:03.344175  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:03.583370  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:03.784841  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:04.083115  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:04.284995  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:04.583586  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:04.783807  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:05.082740  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:05.284259  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:05.582606  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:05.784986  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:06.083998  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:06.285202  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:06.582332  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:06.784272  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:07.083078  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:07.285026  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:07.585573  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:07.784555  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:08.083292  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:08.283911  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:08.584276  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:08.785310  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:09.082625  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:09.284227  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:09.583236  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:09.784138  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:10.082458  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:10.284867  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:10.583486  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:10.785023  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:11.082707  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:11.285605  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:11.583629  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:11.784933  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:12.083169  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:12.285309  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:12.583619  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:12.784347  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:13.082459  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:13.284296  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:13.583022  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:13.784817  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:14.083304  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:14.283696  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:14.583116  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:14.785721  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:15.083248  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:15.284130  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:15.583728  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:15.785247  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:16.082329  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:16.283982  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:16.582971  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:16.785351  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:17.083059  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:17.285002  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:17.582787  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:17.784658  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:18.083244  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:18.283657  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:18.583603  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:18.784151  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:19.082826  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:19.284780  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:19.583276  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:19.784109  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:20.082744  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:20.284185  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:20.582646  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:20.784136  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:21.082568  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:21.284221  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:21.583109  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:21.785090  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:22.083288  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:22.285669  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:22.583117  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:22.788612  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:23.083257  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:23.283984  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:23.584112  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:23.785516  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:24.083877  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:24.284425  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:24.582813  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:24.784696  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:25.083016  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:25.285057  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:25.583734  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:25.785499  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:26.083118  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:26.285119  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:26.582011  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:26.784423  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:27.082337  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:27.283693  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:27.583586  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:27.785214  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:28.082818  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:28.284625  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:28.583372  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:28.783952  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:29.083413  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:29.283735  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:29.587593  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:29.784348  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:30.082734  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:30.284871  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:30.583181  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:30.784584  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:31.083427  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:31.284102  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:31.587445  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:31.788634  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:32.082989  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:32.287667  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:32.587868  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:32.784158  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:33.083245  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:33.286054  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:33.585482  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:33.786183  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:34.084314  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:34.283968  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:34.583945  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:34.787537  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:35.089092  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:35.288792  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:35.586667  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:35.783761  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:36.083648  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:36.284552  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:36.582965  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:36.785006  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:37.083330  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:37.284296  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:37.582910  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:37.788036  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:38.082705  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:38.284213  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:38.583299  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:38.787045  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:39.084786  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:39.287233  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:39.639972  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:39.785664  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:40.094034  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:40.286467  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:40.583503  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:40.784052  143841 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:00:41.082367  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:41.284334  143841 kapi.go:107] duration metric: took 2m27.004021746s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1020 12:00:41.623726  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:42.083333  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:42.584243  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:43.083527  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:43.582648  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:44.229708  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:44.585060  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:45.083753  143841 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:00:45.582724  143841 kapi.go:107] duration metric: took 2m28.003613223s to wait for kubernetes.io/minikube-addons=gcp-auth ...
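The kapi.go:96/kapi.go:107 pairs above are a polling loop: list pods matching a label selector, check their phase, sleep, and repeat until one is Running or the wait times out. A minimal client-go sketch of that shape — the kubeconfig wiring, the 500ms interval, and the namespaces are assumptions for illustration, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel lists pods matching selector until one reports phase Running,
// mirroring the shape of the kapi.go:96 wait above (interval and timeout
// handling here are assumptions, not minikube's code).
func waitForLabel(c *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	// Selectors taken from the log above; the namespaces are assumptions.
	fmt.Println(waitForLabel(c, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 3*time.Minute))
	fmt.Println(waitForLabel(c, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 3*time.Minute))
}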
	I1020 12:00:45.584657  143841 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-323619 cluster.
	I1020 12:00:45.585741  143841 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1020 12:00:45.586855  143841 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1020 12:00:45.587946  143841 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, registry-creds, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1020 12:00:45.589035  143841 addons.go:514] duration metric: took 2m39.908516213s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns nvidia-device-plugin registry-creds storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
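The `gcp-auth-skip-secret` opt-out mentioned in the messages above is just a pod label. A hedged client-go sketch of creating a pod the gcp-auth webhook should leave alone — only the label key comes from the message; the "true" value, pod name, and container spec are illustrative assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	// Only the label key is taken from the log message above; the "true"
	// value and the rest of the spec are assumptions for illustration.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := c.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}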
	I1020 12:00:45.589081  143841 start.go:246] waiting for cluster config update ...
	I1020 12:00:45.589103  143841 start.go:255] writing updated cluster config ...
	I1020 12:00:45.589478  143841 ssh_runner.go:195] Run: rm -f paused
	I1020 12:00:45.595515  143841 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:00:45.599380  143841 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xxnb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.605858  143841 pod_ready.go:94] pod "coredns-66bc5c9577-xxnb6" is "Ready"
	I1020 12:00:45.605883  143841 pod_ready.go:86] duration metric: took 6.463312ms for pod "coredns-66bc5c9577-xxnb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.608174  143841 pod_ready.go:83] waiting for pod "etcd-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.612682  143841 pod_ready.go:94] pod "etcd-addons-323619" is "Ready"
	I1020 12:00:45.612703  143841 pod_ready.go:86] duration metric: took 4.509325ms for pod "etcd-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.614987  143841 pod_ready.go:83] waiting for pod "kube-apiserver-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.620747  143841 pod_ready.go:94] pod "kube-apiserver-addons-323619" is "Ready"
	I1020 12:00:45.620765  143841 pod_ready.go:86] duration metric: took 5.760122ms for pod "kube-apiserver-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:45.623135  143841 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:46.000078  143841 pod_ready.go:94] pod "kube-controller-manager-addons-323619" is "Ready"
	I1020 12:00:46.000110  143841 pod_ready.go:86] duration metric: took 376.952253ms for pod "kube-controller-manager-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:46.199811  143841 pod_ready.go:83] waiting for pod "kube-proxy-7p6h8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:46.600681  143841 pod_ready.go:94] pod "kube-proxy-7p6h8" is "Ready"
	I1020 12:00:46.600713  143841 pod_ready.go:86] duration metric: took 400.856419ms for pod "kube-proxy-7p6h8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:46.800731  143841 pod_ready.go:83] waiting for pod "kube-scheduler-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:47.200553  143841 pod_ready.go:94] pod "kube-scheduler-addons-323619" is "Ready"
	I1020 12:00:47.200585  143841 pod_ready.go:86] duration metric: took 399.826421ms for pod "kube-scheduler-addons-323619" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:00:47.200597  143841 pod_ready.go:40] duration metric: took 1.605051767s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
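Each pod_ready.go:83/:94 pair above resolves once the pod's Ready condition reports True. A sketch of that predicate against the same pods — clientset wiring as in the earlier sketch; this checks only the Ready condition, whereas minikube's pod_ready.go also accepts pods that are gone:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True — the
// predicate the pod_ready.go lines above are polling for.
func isPodReady(p *corev1.Pod) bool {
	for _, cond := range p.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	// Pod names taken from the log above.
	for _, name := range []string{"coredns-66bc5c9577-xxnb6", "etcd-addons-323619", "kube-apiserver-addons-323619"} {
		p, err := c.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println(name, err)
			continue
		}
		fmt.Println(name, "ready:", isPodReady(p))
	}
}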
	I1020 12:00:47.248626  143841 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1020 12:00:47.250354  143841 out.go:179] * Done! kubectl is now configured to use "addons-323619" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.013589807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72444dfb-14d3-478b-9dde-a18b42884a80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.013931574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03eeb25e61d083a79372f3dbea051a20bf0ea319a44d028597961be14f41195e,PodSandboxId:b031288ae5421ec579b2acddd8254599021b3bed1c9d3f2573020a42ff2eb63d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760961672574350057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d4b8ae7-8624-40f6-aef7-014cc379dda1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50560cc1e0cb9987b62a1e49940b59ff777e7380c0b849705e3990963fb6091,PodSandboxId:e2f5066fe2ee355ab78e8671ad8824bb45a1931dcff2d63105722a27295e357d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760961651864398950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f01822ea-7da0-4ac7-a696-823399920504,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f481b3cb9c52d5712c0797b06596ba72fc56d33db977ef641072b19c583b1d83,PodSandboxId:bd84fcb9290c4d2f3cd3a3e290f6db294a5e4a14a3f94f19d1095550cd9b7cf3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760961639990075537,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xcnk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7b08f97-89cd-464a-8f56-d5686c608cc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c102e0ea2af66fcd9504dd089120958d3351e473f05d091c0b4ba63f066b173,PodSandboxId:6d90b975af2abf13fe8cdafc00e4bfac5176e670893e7348febb8511d95ff37d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760961570732921557,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9ngs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c4e39fd-688f-4a71-a417-512841eb85a9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdb71791386fccc06d88fa9682180e1386d85ae8b13aed2b928ba7d233c70ca,PodSandboxId:1459896dec6c45b7b339307202400b3830740ac53ffd3b9c136a684ecd3c949e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760961560266612021,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdjzp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70b3c68b-012e-4d90-b9e1-91ee4b0d001d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c235383ff671a428868c45a215a3da471ae57a964105b3c0a339631d2a143513,PodSandboxId:ece2bf916d7804c67bfd6b5b3ce25c63374d2ebecb17d0f23a9a33d2fb3608aa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760961547519250328,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mmzsg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: e208c3aa-f9c8-4fc1-b8c3-ba4f9c68dbdf,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7af619b899629920ebd310fcc3ba59e452fcc1a1e0897f7c2ebbfd9e04019e6,PodSandboxId:9b59b5a0e1e6c4a6bece93d20935506cc588665bc0b30b2527b7ee491ee2a716,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760961536267327592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9751b14-7770-4371-9c4d-4b6fd14d08e7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a65b5d5c37767cd7589c205d7b47d6ef03bea711c7a03da14cfa94a58e567a,PodSandboxId:5f36745d46e5ac5a20613b8f27fbb52f2f26072976d66b5abe844b089e2bde51,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760961514348938794,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6vxgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc997290-0438-4790-bb90-fa014005eff8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d652c31dd5e2b8abb5390317b679a80913ebc2e368e9b19ef6dfdbd2bcf4b16,PodSandboxId:8fccc8014e838524ddce526ac6a6be55905fcaa949cd4603a3bfdfeecb2370da,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760961493649215393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbbf88ad-b99e-4137-96e3-7ea228aab0c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9f20c41dd7051a3e0da6f3f3372c629a392133205063e60502fc3593e78d6a,PodSandboxId:75f00ee67ae15f7c027c6abd2bd28f56231fa618f5e64761c1f711cbbb9b8515,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760961487251707714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xxnb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 387de95e-0fd5-462c-a8b8-ee5618f6d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9894cef8b26b6c54fc7b9cdc71a5b91abd432c1aec50ed60fceb57908c486c16,PodSandboxId:c20492492a9b7b8df6d5ec27fef7739f1661f99c45495adcc74e9f4955a311d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760961486738882782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p6h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c76506-3962-4ef0-b263-17a2c091b935,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3a9087ef5c0c7b00bdca9cec721afe7e9885b29c787c285b1b71bcf9b1ec5f,PodSandboxId:07cb796c50072a843af98d4551859fcec6837b3a7341b531e449e71d01635a30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760961475209874982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec51eadc2e41ef7b50afce7c50b6fb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b343307a2f6b940967e6c33f2a028ff7ee13db40232c7002e0cb5bb69b213,PodSandboxId:40d040607d4076fb250b46777d4e635a18cd17835c5343bb812fe77d7b5a216a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760961475215234862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad1eb7e038be53e7ecc9d7061930026,},Annotations:map[string]st
ring{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73226f4fe386543d72baab2ed44748279e4a66a6effc853dd69f1c5d1e395640,PodSandboxId:9d7382d5cf4d78fec7105ab0ed355d6ac41da0f2d530973d4a8d52b8a3e551ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760961475193861863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d32bbb1a77f46a11183016030dc12773,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8582bc6e842938cc6a1cca949a36eb484d0549bfb934e6f244d0067bd9c0f96b,PodSandboxId:02473f760f4029c438c276c71ea7da8cba5fcb98ab7985e441938770721a76d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760961475185568848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6fddec4a33b01655e266ab7e5abf6a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72444dfb-14d3-478b-9dde-a18b42884a80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.058118122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=790d40ff-757c-41f1-8d43-63563593818a name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.058364662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=790d40ff-757c-41f1-8d43-63563593818a name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.059788548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1c281d2-3ec1-403c-a5f8-5f0878ebdfb2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.061340600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760961818061309051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1c281d2-3ec1-403c-a5f8-5f0878ebdfb2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.062293781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=916fdbb8-fdbd-4b1e-af49-923dacb6b54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.062395150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=916fdbb8-fdbd-4b1e-af49-923dacb6b54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.063178531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03eeb25e61d083a79372f3dbea051a20bf0ea319a44d028597961be14f41195e,PodSandboxId:b031288ae5421ec579b2acddd8254599021b3bed1c9d3f2573020a42ff2eb63d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760961672574350057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d4b8ae7-8624-40f6-aef7-014cc379dda1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50560cc1e0cb9987b62a1e49940b59ff777e7380c0b849705e3990963fb6091,PodSandboxId:e2f5066fe2ee355ab78e8671ad8824bb45a1931dcff2d63105722a27295e357d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760961651864398950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f01822ea-7da0-4ac7-a696-823399920504,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f481b3cb9c52d5712c0797b06596ba72fc56d33db977ef641072b19c583b1d83,PodSandboxId:bd84fcb9290c4d2f3cd3a3e290f6db294a5e4a14a3f94f19d1095550cd9b7cf3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760961639990075537,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xcnk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7b08f97-89cd-464a-8f56-d5686c608cc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c102e0ea2af66fcd9504dd089120958d3351e473f05d091c0b4ba63f066b173,PodSandboxId:6d90b975af2abf13fe8cdafc00e4bfac5176e670893e7348febb8511d95ff37d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760961570732921557,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9ngs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c4e39fd-688f-4a71-a417-512841eb85a9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdb71791386fccc06d88fa9682180e1386d85ae8b13aed2b928ba7d233c70ca,PodSandboxId:1459896dec6c45b7b339307202400b3830740ac53ffd3b9c136a684ecd3c949e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760961560266612021,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdjzp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70b3c68b-012e-4d90-b9e1-91ee4b0d001d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c235383ff671a428868c45a215a3da471ae57a964105b3c0a339631d2a143513,PodSandboxId:ece2bf916d7804c67bfd6b5b3ce25c63374d2ebecb17d0f23a9a33d2fb3608aa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760961547519250328,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mmzsg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: e208c3aa-f9c8-4fc1-b8c3-ba4f9c68dbdf,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7af619b899629920ebd310fcc3ba59e452fcc1a1e0897f7c2ebbfd9e04019e6,PodSandboxId:9b59b5a0e1e6c4a6bece93d20935506cc588665bc0b30b2527b7ee491ee2a716,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760961536267327592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9751b14-7770-4371-9c4d-4b6fd14d08e7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a65b5d5c37767cd7589c205d7b47d6ef03bea711c7a03da14cfa94a58e567a,PodSandboxId:5f36745d46e5ac5a20613b8f27fbb52f2f26072976d66b5abe844b089e2bde51,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760961514348938794,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6vxgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc997290-0438-4790-bb90-fa014005eff8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d652c31dd5e2b8abb5390317b679a80913ebc2e368e9b19ef6dfdbd2bcf4b16,PodSandboxId:8fccc8014e838524ddce526ac6a6be55905fcaa949cd4603a3bfdfeecb2370da,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760961493649215393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbbf88ad-b99e-4137-96e3-7ea228aab0c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9f20c41dd7051a3e0da6f3f3372c629a392133205063e60502fc3593e78d6a,PodSandboxId:75f00ee67ae15f7c027c6abd2bd28f56231fa618f5e64761c1f711cbbb9b8515,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760961487251707714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xxnb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 387de95e-0fd5-462c-a8b8-ee5618f6d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9894cef8b26b6c54fc7b9cdc71a5b91abd432c1aec50ed60fceb57908c486c16,PodSandboxId:c20492492a9b7b8df6d5ec27fef7739f1661f99c45495adcc74e9f4955a311d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760961486738882782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p6h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c76506-3962-4ef0-b263-17a2c091b935,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3a9087ef5c0c7b00bdca9cec721afe7e9885b29c787c285b1b71bcf9b1ec5f,PodSandboxId:07cb796c50072a843af98d4551859fcec6837b3a7341b531e449e71d01635a30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760961475209874982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec51eadc2e41ef7b50afce7c50b6fb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b343307a2f6b940967e6c33f2a028ff7ee13db40232c7002e0cb5bb69b213,PodSandboxId:40d040607d4076fb250b46777d4e635a18cd17835c5343bb812fe77d7b5a216a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760961475215234862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad1eb7e038be53e7ecc9d7061930026,},Annotations:map[string]st
ring{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73226f4fe386543d72baab2ed44748279e4a66a6effc853dd69f1c5d1e395640,PodSandboxId:9d7382d5cf4d78fec7105ab0ed355d6ac41da0f2d530973d4a8d52b8a3e551ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760961475193861863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d32bbb1a77f46a11183016030dc12773,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8582bc6e842938cc6a1cca949a36eb484d0549bfb934e6f244d0067bd9c0f96b,PodSandboxId:02473f760f4029c438c276c71ea7da8cba5fcb98ab7985e441938770721a76d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760961475185568848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6fddec4a33b01655e266ab7e5abf6a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=916fdbb8-fdbd-4b1e-af49-923dacb6b54e name=/runtime.v1.RuntimeService/ListContainers
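Each Request/Response pair in the CRI-O debug log above is a CRI gRPC call over the runtime's local socket (the same endpoint `crictl ps -a` drives). A sketch issuing the runtime.v1.RuntimeService/ListContainers RPC seen here — the socket path is CRI-O's default and an assumption if the host overrides it:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path; adjust if the host configures another.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug entries logged above.
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, ctr := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", ctr.Id[:12], ctr.State, ctr.Metadata.Name)
	}
}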
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.102212694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebdce768-48a9-4217-b21a-f1224506eea1 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.102312155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebdce768-48a9-4217-b21a-f1224506eea1 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.103833520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8af50c84-9d8a-4aad-84cc-761a3e75e0c1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.105081392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760961818105053883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8af50c84-9d8a-4aad-84cc-761a3e75e0c1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.105974413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f36a3359-d38b-4820-b471-a33b9e9a79f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.106073969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f36a3359-d38b-4820-b471-a33b9e9a79f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.107113548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03eeb25e61d083a79372f3dbea051a20bf0ea319a44d028597961be14f41195e,PodSandboxId:b031288ae5421ec579b2acddd8254599021b3bed1c9d3f2573020a42ff2eb63d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760961672574350057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d4b8ae7-8624-40f6-aef7-014cc379dda1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50560cc1e0cb9987b62a1e49940b59ff777e7380c0b849705e3990963fb6091,PodSandboxId:e2f5066fe2ee355ab78e8671ad8824bb45a1931dcff2d63105722a27295e357d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760961651864398950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f01822ea-7da0-4ac7-a696-823399920504,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f481b3cb9c52d5712c0797b06596ba72fc56d33db977ef641072b19c583b1d83,PodSandboxId:bd84fcb9290c4d2f3cd3a3e290f6db294a5e4a14a3f94f19d1095550cd9b7cf3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760961639990075537,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xcnk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7b08f97-89cd-464a-8f56-d5686c608cc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c102e0ea2af66fcd9504dd089120958d3351e473f05d091c0b4ba63f066b173,PodSandboxId:6d90b975af2abf13fe8cdafc00e4bfac5176e670893e7348febb8511d95ff37d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760961570732921557,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9ngs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c4e39fd-688f-4a71-a417-512841eb85a9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdb71791386fccc06d88fa9682180e1386d85ae8b13aed2b928ba7d233c70ca,PodSandboxId:1459896dec6c45b7b339307202400b3830740ac53ffd3b9c136a684ecd3c949e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760961560266612021,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdjzp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70b3c68b-012e-4d90-b9e1-91ee4b0d001d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c235383ff671a428868c45a215a3da471ae57a964105b3c0a339631d2a143513,PodSandboxId:ece2bf916d7804c67bfd6b5b3ce25c63374d2ebecb17d0f23a9a33d2fb3608aa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760961547519250328,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mmzsg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: e208c3aa-f9c8-4fc1-b8c3-ba4f9c68dbdf,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7af619b899629920ebd310fcc3ba59e452fcc1a1e0897f7c2ebbfd9e04019e6,PodSandboxId:9b59b5a0e1e6c4a6bece93d20935506cc588665bc0b30b2527b7ee491ee2a716,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760961536267327592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9751b14-7770-4371-9c4d-4b6fd14d08e7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a65b5d5c37767cd7589c205d7b47d6ef03bea711c7a03da14cfa94a58e567a,PodSandboxId:5f36745d46e5ac5a20613b8f27fbb52f2f26072976d66b5abe844b089e2bde51,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760961514348938794,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6vxgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc997290-0438-4790-bb90-fa014005eff8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d652c31dd5e2b8abb5390317b679a80913ebc2e368e9b19ef6dfdbd2bcf4b16,PodSandboxId:8fccc8014e838524ddce526ac6a6be55905fcaa949cd4603a3bfdfeecb2370da,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760961493649215393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbbf88ad-b99e-4137-96e3-7ea228aab0c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9f20c41dd7051a3e0da6f3f3372c629a392133205063e60502fc3593e78d6a,PodSandboxId:75f00ee67ae15f7c027c6abd2bd28f56231fa618f5e64761c1f711cbbb9b8515,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760961487251707714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xxnb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 387de95e-0fd5-462c-a8b8-ee5618f6d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9894cef8b26b6c54fc7b9cdc71a5b91abd432c1aec50ed60fceb57908c486c16,PodSandboxId:c20492492a9b7b8df6d5ec27fef7739f1661f99c45495adcc74e9f4955a311d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760961486738882782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p6h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c76506-3962-4ef0-b263-17a2c091b935,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3a9087ef5c0c7b00bdca9cec721afe7e9885b29c787c285b1b71bcf9b1ec5f,PodSandboxId:07cb796c50072a843af98d4551859fcec6837b3a7341b531e449e71d01635a30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760961475209874982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec51eadc2e41ef7b50afce7c50b6fb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b343307a2f6b940967e6c33f2a028ff7ee13db40232c7002e0cb5bb69b213,PodSandboxId:40d040607d4076fb250b46777d4e635a18cd17835c5343bb812fe77d7b5a216a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760961475215234862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad1eb7e038be53e7ecc9d7061930026,},Annotations:map[string]st
ring{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73226f4fe386543d72baab2ed44748279e4a66a6effc853dd69f1c5d1e395640,PodSandboxId:9d7382d5cf4d78fec7105ab0ed355d6ac41da0f2d530973d4a8d52b8a3e551ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760961475193861863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d32bbb1a77f46a11183016030dc12773,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8582bc6e842938cc6a1cca949a36eb484d0549bfb934e6f244d0067bd9c0f96b,PodSandboxId:02473f760f4029c438c276c71ea7da8cba5fcb98ab7985e441938770721a76d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760961475185568848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6fddec4a33b01655e266ab7e5abf6a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f36a3359-d38b-4820-b471-a33b9e9a79f7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.151891445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=694cf3d0-a94d-44ce-9bf7-6312a38007db name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.151984158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=694cf3d0-a94d-44ce-9bf7-6312a38007db name=/runtime.v1.RuntimeService/Version
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.153204776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fee95e1-3422-499b-83ff-0d667b3058aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.154442168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760961818154415546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fee95e1-3422-499b-83ff-0d667b3058aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.155307705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e90d30f6-ce89-496d-aa8c-ea772fa28d2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.155427083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e90d30f6-ce89-496d-aa8c-ea772fa28d2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.155799357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03eeb25e61d083a79372f3dbea051a20bf0ea319a44d028597961be14f41195e,PodSandboxId:b031288ae5421ec579b2acddd8254599021b3bed1c9d3f2573020a42ff2eb63d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760961672574350057,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d4b8ae7-8624-40f6-aef7-014cc379dda1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50560cc1e0cb9987b62a1e49940b59ff777e7380c0b849705e3990963fb6091,PodSandboxId:e2f5066fe2ee355ab78e8671ad8824bb45a1931dcff2d63105722a27295e357d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760961651864398950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f01822ea-7da0-4ac7-a696-823399920504,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f481b3cb9c52d5712c0797b06596ba72fc56d33db977ef641072b19c583b1d83,PodSandboxId:bd84fcb9290c4d2f3cd3a3e290f6db294a5e4a14a3f94f19d1095550cd9b7cf3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760961639990075537,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-xcnk9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7b08f97-89cd-464a-8f56-d5686c608cc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6c102e0ea2af66fcd9504dd089120958d3351e473f05d091c0b4ba63f066b173,PodSandboxId:6d90b975af2abf13fe8cdafc00e4bfac5176e670893e7348febb8511d95ff37d,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760961570732921557,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9ngs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c4e39fd-688f-4a71-a417-512841eb85a9,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdb71791386fccc06d88fa9682180e1386d85ae8b13aed2b928ba7d233c70ca,PodSandboxId:1459896dec6c45b7b339307202400b3830740ac53ffd3b9c136a684ecd3c949e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760961560266612021,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qdjzp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 70b3c68b-012e-4d90-b9e1-91ee4b0d001d,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c235383ff671a428868c45a215a3da471ae57a964105b3c0a339631d2a143513,PodSandboxId:ece2bf916d7804c67bfd6b5b3ce25c63374d2ebecb17d0f23a9a33d2fb3608aa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760961547519250328,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-mmzsg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: e208c3aa-f9c8-4fc1-b8c3-ba4f9c68dbdf,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7af619b899629920ebd310fcc3ba59e452fcc1a1e0897f7c2ebbfd9e04019e6,PodSandboxId:9b59b5a0e1e6c4a6bece93d20935506cc588665bc0b30b2527b7ee491ee2a716,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760961536267327592,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9751b14-7770-4371-9c4d-4b6fd14d08e7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a65b5d5c37767cd7589c205d7b47d6ef03bea711c7a03da14cfa94a58e567a,PodSandboxId:5f36745d46e5ac5a20613b8f27fbb52f2f26072976d66b5abe844b089e2bde51,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760961514348938794,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6vxgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc997290-0438-4790-bb90-fa014005eff8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d652c31dd5e2b8abb5390317b679a80913ebc2e368e9b19ef6dfdbd2bcf4b16,PodSandboxId:8fccc8014e838524ddce526ac6a6be55905fcaa949cd4603a3bfdfeecb2370da,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760961493649215393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbbf88ad-b99e-4137-96e3-7ea228aab0c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9f20c41dd7051a3e0da6f3f3372c629a392133205063e60502fc3593e78d6a,PodSandboxId:75f00ee67ae15f7c027c6abd2bd28f56231fa618f5e64761c1f711cbbb9b8515,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760961487251707714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xxnb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 387de95e-0fd5-462c-a8b8-ee5618f6d0bd,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9894cef8b26b6c54fc7b9cdc71a5b91abd432c1aec50ed60fceb57908c486c16,PodSandboxId:c20492492a9b7b8df6d5ec27fef7739f1661f99c45495adcc74e9f4955a311d1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760961486738882782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p6h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c76506-3962-4ef0-b263-17a2c091b935,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3a9087ef5c0c7b00bdca9cec721afe7e9885b29c787c285b1b71bcf9b1ec5f,PodSandboxId:07cb796c50072a843af98d4551859fcec6837b3a7341b531e449e71d01635a30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760961475209874982,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec51eadc2e41ef7b50afce7c50b6fb05,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b343307a2f6b940967e6c33f2a028ff7ee13db40232c7002e0cb5bb69b213,PodSandboxId:40d040607d4076fb250b46777d4e635a18cd17835c5343bb812fe77d7b5a216a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760961475215234862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cad1eb7e038be53e7ecc9d7061930026,},Annotations:map[string]st
ring{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73226f4fe386543d72baab2ed44748279e4a66a6effc853dd69f1c5d1e395640,PodSandboxId:9d7382d5cf4d78fec7105ab0ed355d6ac41da0f2d530973d4a8d52b8a3e551ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760961475193861863,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d32bbb1a77f46a11183016030dc12773,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8582bc6e842938cc6a1cca949a36eb484d0549bfb934e6f244d0067bd9c0f96b,PodSandboxId:02473f760f4029c438c276c71ea7da8cba5fcb98ab7985e441938770721a76d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760961475185568848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserv
er,io.kubernetes.pod.name: kube-apiserver-addons-323619,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6fddec4a33b01655e266ab7e5abf6a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e90d30f6-ce89-496d-aa8c-ea772fa28d2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.159483997Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 20 12:03:38 addons-323619 crio[823]: time="2025-10-20 12:03:38.159658859Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
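
The ListContainers dumps above are the kubelet's periodic CRI polls; CRI-O answers each with the full container inventory because the request carries an empty filter ("No filters were applied, returning full container list"). The same RPC can be issued by hand against the CRI-O socket. A minimal Go sketch, assuming the default socket path /var/run/crio/crio.sock and the k8s.io/cri-api client (neither is part of this test suite):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O socket; the path is an assumption, adjust for your host.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty ListContainersRequest mirrors the unfiltered kubelet poll above.
    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
    		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Println(c.Id, c.State, c.Metadata.Name)
    	}
    }

This is the same query "crictl ps -a" performs, and its output corresponds to the "==> container status <==" table that follows.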
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03eeb25e61d08       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   b031288ae5421       nginx
	a50560cc1e0cb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   e2f5066fe2ee3       busybox
	f481b3cb9c52d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             2 minutes ago       Running             controller                0                   bd84fcb9290c4       ingress-nginx-controller-675c5ddd98-xcnk9
	6c102e0ea2af6       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     2                   6d90b975af2ab       ingress-nginx-admission-patch-9ngs2
	1fdb71791386f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   1459896dec6c4       ingress-nginx-admission-create-qdjzp
	c235383ff671a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   ece2bf916d780       gadget-mmzsg
	b7af619b89962       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   9b59b5a0e1e6c       kube-ingress-dns-minikube
	e5a65b5d5c377       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   5f36745d46e5a       amd-gpu-device-plugin-6vxgv
	6d652c31dd5e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   8fccc8014e838       storage-provisioner
	cf9f20c41dd70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   75f00ee67ae15       coredns-66bc5c9577-xxnb6
	9894cef8b26b6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   c20492492a9b7       kube-proxy-7p6h8
	6c5b343307a2f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   40d040607d407       kube-scheduler-addons-323619
	2b3a9087ef5c0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   07cb796c50072       kube-controller-manager-addons-323619
	73226f4fe3865       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   9d7382d5cf4d7       etcd-addons-323619
	8582bc6e84293       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   02473f760f402       kube-apiserver-addons-323619
	
	
	==> coredns [cf9f20c41dd7051a3e0da6f3f3372c629a392133205063e60502fc3593e78d6a] <==
	[INFO] 10.244.0.8:60133 - 27029 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000275048s
	[INFO] 10.244.0.8:60133 - 38187 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000113277s
	[INFO] 10.244.0.8:60133 - 20125 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000225214s
	[INFO] 10.244.0.8:60133 - 40604 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009974s
	[INFO] 10.244.0.8:60133 - 32635 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000067966s
	[INFO] 10.244.0.8:60133 - 44025 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000118075s
	[INFO] 10.244.0.8:60133 - 26290 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000171414s
	[INFO] 10.244.0.8:51418 - 59522 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122096s
	[INFO] 10.244.0.8:51418 - 59849 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125137s
	[INFO] 10.244.0.8:46576 - 50104 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170136s
	[INFO] 10.244.0.8:46576 - 49849 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000206469s
	[INFO] 10.244.0.8:56033 - 47620 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068172s
	[INFO] 10.244.0.8:56033 - 48069 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190184s
	[INFO] 10.244.0.8:42058 - 61231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140636s
	[INFO] 10.244.0.8:42058 - 61003 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000211349s
	[INFO] 10.244.0.23:49865 - 48694 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000477032s
	[INFO] 10.244.0.23:58365 - 40313 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011536s
	[INFO] 10.244.0.23:58647 - 46282 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135729s
	[INFO] 10.244.0.23:55660 - 39335 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059793s
	[INFO] 10.244.0.23:60140 - 12094 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083249s
	[INFO] 10.244.0.23:52896 - 10695 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000068457s
	[INFO] 10.244.0.23:40313 - 6192 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.005275517s
	[INFO] 10.244.0.23:52141 - 62352 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00517176s
	[INFO] 10.244.0.27:37338 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000358501s
	[INFO] 10.244.0.27:53098 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164407s
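
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion, not resolution failures: because the queried names have fewer dots than the ndots threshold, each client first tries the name with every search domain appended, and only the final bare lookup (e.g. registry.kube-system.svc.cluster.local) returns NOERROR. A pod resolv.conf along these lines (a representative sketch for a kube-system pod, not captured from this cluster) produces exactly that query sequence:

    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10   # conventional kube-dns ClusterIP; an assumption here
    options ndots:5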
	
	
	==> describe nodes <==
	Name:               addons-323619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-323619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=addons-323619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T11_58_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-323619
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 11:57:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-323619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:03:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:02:05 +0000   Mon, 20 Oct 2025 11:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:02:05 +0000   Mon, 20 Oct 2025 11:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:02:05 +0000   Mon, 20 Oct 2025 11:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:02:05 +0000   Mon, 20 Oct 2025 11:58:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    addons-323619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 89313cb4204544279e99779a8a628312
	  System UUID:                89313cb4-2045-4427-9e99-779a8a628312
	  Boot ID:                    28e936df-f7d1-45ce-8079-f0a4425b9d7d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     hello-world-app-5d498dc89-5f2d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-mmzsg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xcnk9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m25s
	  kube-system                 amd-gpu-device-plugin-6vxgv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 coredns-66bc5c9577-xxnb6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m32s
	  kube-system                 etcd-addons-323619                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m39s
	  kube-system                 kube-apiserver-addons-323619                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-addons-323619        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-7p6h8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-addons-323619                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node addons-323619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node addons-323619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node addons-323619 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m38s                  kubelet          Node addons-323619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s                  kubelet          Node addons-323619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s                  kubelet          Node addons-323619 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m37s                  kubelet          Node addons-323619 status is now: NodeReady
	  Normal  RegisteredNode           5m34s                  node-controller  Node addons-323619 event: Registered Node addons-323619 in Controller
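
The percentages in the tables above are computed against the node's Allocatable figures: 850m CPU requested out of 2 CPUs (2000m) is 42% (truncated from 42.5%), and 260Mi of requested memory out of 4008592Ki (about 3914Mi) is roughly 6%.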
	
	
	==> dmesg <==
	[ +13.030005] kauditd_printk_skb: 254 callbacks suppressed
	[ +10.512426] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.742906] kauditd_printk_skb: 38 callbacks suppressed
	[Oct20 11:59] kauditd_printk_skb: 20 callbacks suppressed
	[ +10.450717] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.813229] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.423517] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.997872] kauditd_printk_skb: 140 callbacks suppressed
	[Oct20 12:00] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.251701] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.193722] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.692040] kauditd_printk_skb: 32 callbacks suppressed
	[Oct20 12:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.567040] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.020099] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.234269] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.011471] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.491674] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.409001] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.676733] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.175017] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.262582] kauditd_printk_skb: 37 callbacks suppressed
	[Oct20 12:02] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.864615] kauditd_printk_skb: 41 callbacks suppressed
	[Oct20 12:03] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [73226f4fe386543d72baab2ed44748279e4a66a6effc853dd69f1c5d1e395640] <==
	{"level":"info","ts":"2025-10-20T11:58:49.479755Z","caller":"traceutil/trace.go:172","msg":"trace[499195292] transaction","detail":"{read_only:false; response_revision:976; number_of_response:1; }","duration":"192.862621ms","start":"2025-10-20T11:58:49.286881Z","end":"2025-10-20T11:58:49.479744Z","steps":["trace[499195292] 'process raft request'  (duration: 192.766851ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:59:02.446358Z","caller":"traceutil/trace.go:172","msg":"trace[657956345] linearizableReadLoop","detail":"{readStateIndex:1028; appliedIndex:1028; }","duration":"167.38613ms","start":"2025-10-20T11:59:02.278957Z","end":"2025-10-20T11:59:02.446344Z","steps":["trace[657956345] 'read index received'  (duration: 167.381431ms)","trace[657956345] 'applied index is now lower than readState.Index'  (duration: 4.076µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T11:59:02.446561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.586717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:59:02.446957Z","caller":"traceutil/trace.go:172","msg":"trace[623630662] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:998; }","duration":"167.991714ms","start":"2025-10-20T11:59:02.278954Z","end":"2025-10-20T11:59:02.446945Z","steps":["trace[623630662] 'agreement among raft nodes before linearized reading'  (duration: 167.56557ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:59:02.446563Z","caller":"traceutil/trace.go:172","msg":"trace[2038815522] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"173.201894ms","start":"2025-10-20T11:59:02.273350Z","end":"2025-10-20T11:59:02.446552Z","steps":["trace[2038815522] 'process raft request'  (duration: 173.071427ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:59:02.446811Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.966863ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:59:02.447606Z","caller":"traceutil/trace.go:172","msg":"trace[10555385] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:999; }","duration":"134.765873ms","start":"2025-10-20T11:59:02.312832Z","end":"2025-10-20T11:59:02.447597Z","steps":["trace[10555385] 'agreement among raft nodes before linearized reading'  (duration: 133.955756ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:59:12.880980Z","caller":"traceutil/trace.go:172","msg":"trace[1802820695] linearizableReadLoop","detail":"{readStateIndex:1061; appliedIndex:1061; }","duration":"109.358239ms","start":"2025-10-20T11:59:12.771599Z","end":"2025-10-20T11:59:12.880957Z","steps":["trace[1802820695] 'read index received'  (duration: 109.353482ms)","trace[1802820695] 'applied index is now lower than readState.Index'  (duration: 4.013µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-20T11:59:12.881289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.642979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-10-20T11:59:12.881333Z","caller":"traceutil/trace.go:172","msg":"trace[1858181645] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1029; }","duration":"109.730848ms","start":"2025-10-20T11:59:12.771594Z","end":"2025-10-20T11:59:12.881325Z","steps":["trace[1858181645] 'agreement among raft nodes before linearized reading'  (duration: 109.524323ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:59:12.881662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.483573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:59:12.881688Z","caller":"traceutil/trace.go:172","msg":"trace[1826447492] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1030; }","duration":"100.515429ms","start":"2025-10-20T11:59:12.781165Z","end":"2025-10-20T11:59:12.881681Z","steps":["trace[1826447492] 'agreement among raft nodes before linearized reading'  (duration: 100.468416ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T11:59:12.881873Z","caller":"traceutil/trace.go:172","msg":"trace[1882459235] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"264.760966ms","start":"2025-10-20T11:59:12.617099Z","end":"2025-10-20T11:59:12.881860Z","steps":["trace[1882459235] 'process raft request'  (duration: 264.420422ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:59:27.409589Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.940061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:59:27.409706Z","caller":"traceutil/trace.go:172","msg":"trace[196756339] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1124; }","duration":"222.068676ms","start":"2025-10-20T11:59:27.187623Z","end":"2025-10-20T11:59:27.409692Z","steps":["trace[196756339] 'range keys from in-memory index tree'  (duration: 221.884408ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T11:59:27.410150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.21505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T11:59:27.410638Z","caller":"traceutil/trace.go:172","msg":"trace[1040893620] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1124; }","duration":"131.707026ms","start":"2025-10-20T11:59:27.278916Z","end":"2025-10-20T11:59:27.410623Z","steps":["trace[1040893620] 'range keys from in-memory index tree'  (duration: 131.137749ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:00:03.338710Z","caller":"traceutil/trace.go:172","msg":"trace[1593781916] transaction","detail":"{read_only:false; response_revision:1229; number_of_response:1; }","duration":"100.835914ms","start":"2025-10-20T12:00:03.237859Z","end":"2025-10-20T12:00:03.338695Z","steps":["trace[1593781916] 'process raft request'  (duration: 100.706143ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:00:39.773420Z","caller":"traceutil/trace.go:172","msg":"trace[1392793426] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"136.457395ms","start":"2025-10-20T12:00:39.636949Z","end":"2025-10-20T12:00:39.773406Z","steps":["trace[1392793426] 'process raft request'  (duration: 136.409559ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:00:39.773644Z","caller":"traceutil/trace.go:172","msg":"trace[2112736866] transaction","detail":"{read_only:false; response_revision:1277; number_of_response:1; }","duration":"140.432847ms","start":"2025-10-20T12:00:39.633196Z","end":"2025-10-20T12:00:39.773629Z","steps":["trace[2112736866] 'process raft request'  (duration: 138.465471ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:00:44.219224Z","caller":"traceutil/trace.go:172","msg":"trace[2006560833] linearizableReadLoop","detail":"{readStateIndex:1348; appliedIndex:1348; }","duration":"143.392508ms","start":"2025-10-20T12:00:44.075814Z","end":"2025-10-20T12:00:44.219206Z","steps":["trace[2006560833] 'read index received'  (duration: 143.385149ms)","trace[2006560833] 'applied index is now lower than readState.Index'  (duration: 6.444µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-20T12:00:44.219416Z","caller":"traceutil/trace.go:172","msg":"trace[1809503758] transaction","detail":"{read_only:false; response_revision:1298; number_of_response:1; }","duration":"146.969894ms","start":"2025-10-20T12:00:44.072435Z","end":"2025-10-20T12:00:44.219405Z","steps":["trace[1809503758] 'process raft request'  (duration: 146.795054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-20T12:00:44.221286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.46237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-20T12:00:44.221516Z","caller":"traceutil/trace.go:172","msg":"trace[2013583529] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1298; }","duration":"145.702197ms","start":"2025-10-20T12:00:44.075804Z","end":"2025-10-20T12:00:44.221506Z","steps":["trace[2013583529] 'agreement among raft nodes before linearized reading'  (duration: 143.639317ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-20T12:01:50.403471Z","caller":"traceutil/trace.go:172","msg":"trace[1264256908] transaction","detail":"{read_only:false; response_revision:1731; number_of_response:1; }","duration":"198.940505ms","start":"2025-10-20T12:01:50.204517Z","end":"2025-10-20T12:01:50.403458Z","steps":["trace[1264256908] 'process raft request'  (duration: 198.853184ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:03:38 up 6 min,  0 users,  load average: 0.37, 0.93, 0.55
	Linux addons-323619 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8582bc6e842938cc6a1cca949a36eb484d0549bfb934e6f244d0067bd9c0f96b] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1020 11:59:04.715323       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.165.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.165.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.165.93:443: connect: connection refused" logger="UnhandledError"
	E1020 11:59:04.717598       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.165.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.165.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.165.93:443: connect: connection refused" logger="UnhandledError"
	I1020 11:59:04.819603       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1020 12:00:59.040414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.233:8443->192.168.39.1:48200: use of closed network connection
	E1020 12:00:59.229440       1 conn.go:339] Error on socket receive: read tcp 192.168.39.233:8443->192.168.39.1:48224: use of closed network connection
	I1020 12:01:07.909888       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1020 12:01:08.104111       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.113.182"}
	I1020 12:01:37.509903       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.186.205"}
	E1020 12:01:43.872814       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1020 12:01:58.578245       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1020 12:02:05.730079       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1020 12:02:14.278150       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1020 12:02:14.278191       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1020 12:02:14.308861       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1020 12:02:14.308923       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1020 12:02:14.332108       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1020 12:02:14.332166       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1020 12:02:14.477413       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1020 12:02:14.478133       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1020 12:02:15.303383       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1020 12:02:15.477690       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1020 12:02:15.490298       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1020 12:03:36.796622       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.27.212"}
	
	
	==> kube-controller-manager [2b3a9087ef5c0c7b00bdca9cec721afe7e9885b29c787c285b1b71bcf9b1ec5f] <==
	E1020 12:02:23.864929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:24.073984       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:24.075181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:24.315874       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:24.316876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:30.612154       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:30.613234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:31.674136       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:31.675229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:34.391390       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:34.392459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1020 12:02:35.012522       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1020 12:02:35.012558       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:02:35.034845       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1020 12:02:35.034907       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1020 12:02:45.381802       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:45.382868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:50.710066       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:50.712190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:02:58.744975       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:02:58.747107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:03:22.319776       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:03:22.320841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1020 12:03:37.406337       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1020 12:03:37.407564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9894cef8b26b6c54fc7b9cdc71a5b91abd432c1aec50ed60fceb57908c486c16] <==
	I1020 11:58:07.362593       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 11:58:07.469494       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 11:58:07.469542       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.233"]
	E1020 11:58:07.469621       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 11:58:07.637222       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1020 11:58:07.638172       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1020 11:58:07.638389       1 server_linux.go:132] "Using iptables Proxier"
	I1020 11:58:07.663672       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 11:58:07.666077       1 server.go:527] "Version info" version="v1.34.1"
	I1020 11:58:07.666091       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 11:58:07.673852       1 config.go:200] "Starting service config controller"
	I1020 11:58:07.673864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 11:58:07.673876       1 config.go:106] "Starting endpoint slice config controller"
	I1020 11:58:07.673880       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 11:58:07.674053       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 11:58:07.674059       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 11:58:07.681355       1 config.go:309] "Starting node config controller"
	I1020 11:58:07.681382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 11:58:07.681388       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 11:58:07.774449       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 11:58:07.774487       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 11:58:07.774486       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6c5b343307a2f6b940967e6c33f2a028ff7ee13db40232c7002e0cb5bb69b213] <==
	E1020 11:57:57.988201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 11:57:57.989100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 11:57:57.989157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 11:57:57.989250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 11:57:57.989310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 11:57:57.989363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 11:57:57.989420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 11:57:57.989654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 11:57:57.991106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 11:57:57.991203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 11:57:57.991201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 11:57:57.992868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:57:57.992880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 11:57:57.993063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 11:57:57.993726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 11:57:58.801801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 11:57:58.925595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 11:57:58.926939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 11:57:58.946701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 11:57:58.946795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 11:57:58.995531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 11:57:59.016883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 11:57:59.020695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 11:57:59.159715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1020 11:57:59.584186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:02:17 addons-323619 kubelet[1503]: E1020 12:02:17.331573    1503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e\": container with ID starting with e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e not found: ID does not exist" containerID="e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e"
	Oct 20 12:02:17 addons-323619 kubelet[1503]: I1020 12:02:17.331621    1503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e"} err="failed to get container status \"e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e\": rpc error: code = NotFound desc = could not find container \"e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e\": container with ID starting with e6c3c4eca90de08f3d8ffde8980870e5fb93b7d0503e4f9d90b645bfdddeab4e not found: ID does not exist"
	Oct 20 12:02:17 addons-323619 kubelet[1503]: I1020 12:02:17.331643    1503 scope.go:117] "RemoveContainer" containerID="e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b"
	Oct 20 12:02:17 addons-323619 kubelet[1503]: I1020 12:02:17.445304    1503 scope.go:117] "RemoveContainer" containerID="e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b"
	Oct 20 12:02:17 addons-323619 kubelet[1503]: E1020 12:02:17.446249    1503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b\": container with ID starting with e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b not found: ID does not exist" containerID="e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b"
	Oct 20 12:02:17 addons-323619 kubelet[1503]: I1020 12:02:17.446346    1503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b"} err="failed to get container status \"e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b\": rpc error: code = NotFound desc = could not find container \"e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b\": container with ID starting with e025f67814c027ef290904f8115a15d0902f2439c22ff6da8234ea939e68a16b not found: ID does not exist"
	Oct 20 12:02:20 addons-323619 kubelet[1503]: E1020 12:02:20.964239    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961740963715311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:20 addons-323619 kubelet[1503]: E1020 12:02:20.964265    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961740963715311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:30 addons-323619 kubelet[1503]: E1020 12:02:30.967000    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961750966593417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:30 addons-323619 kubelet[1503]: E1020 12:02:30.967063    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961750966593417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:40 addons-323619 kubelet[1503]: E1020 12:02:40.970484    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961760969912305  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:40 addons-323619 kubelet[1503]: E1020 12:02:40.970509    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961760969912305  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:50 addons-323619 kubelet[1503]: E1020 12:02:50.973484    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961770973142390  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:02:50 addons-323619 kubelet[1503]: E1020 12:02:50.973550    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961770973142390  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:00 addons-323619 kubelet[1503]: E1020 12:03:00.976982    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961780976526645  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:00 addons-323619 kubelet[1503]: E1020 12:03:00.977064    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961780976526645  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:10 addons-323619 kubelet[1503]: E1020 12:03:10.979535    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961790978912965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:10 addons-323619 kubelet[1503]: E1020 12:03:10.979595    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961790978912965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:18 addons-323619 kubelet[1503]: I1020 12:03:18.712346    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6vxgv" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:03:20 addons-323619 kubelet[1503]: E1020 12:03:20.983040    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961800982325239  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:20 addons-323619 kubelet[1503]: E1020 12:03:20.983068    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961800982325239  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:22 addons-323619 kubelet[1503]: I1020 12:03:22.710844    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:03:30 addons-323619 kubelet[1503]: E1020 12:03:30.986180    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760961810985527766  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:30 addons-323619 kubelet[1503]: E1020 12:03:30.986205    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760961810985527766  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 20 12:03:36 addons-323619 kubelet[1503]: I1020 12:03:36.810153    1503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwdj\" (UniqueName: \"kubernetes.io/projected/e92f2cbd-d636-40b5-92d6-d73eba59d923-kube-api-access-bfwdj\") pod \"hello-world-app-5d498dc89-5f2d9\" (UID: \"e92f2cbd-d636-40b5-92d6-d73eba59d923\") " pod="default/hello-world-app-5d498dc89-5f2d9"
	
	
	==> storage-provisioner [6d652c31dd5e2b8abb5390317b679a80913ebc2e368e9b19ef6dfdbd2bcf4b16] <==
	W1020 12:03:12.946421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:14.949676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:14.958529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:16.962530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:16.968440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:18.972371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:18.978269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:20.984151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:20.992452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:22.996582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:23.004467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:25.008631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:25.017296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:27.021199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:27.026041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:29.029554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:29.034942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:31.038619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:31.043906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:33.048415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:33.056905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:35.060246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:35.065597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:37.069997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:03:37.081802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
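
The storage-provisioner section above is dominated by client-go deprecation warnings: the calls behind them still use the v1 Endpoints API, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement the warning names, assuming in-cluster credentials and the kube-system namespace (both assumptions for illustration, not taken from the provisioner's source):

	// A sketch of the API the warnings point to (assumed usage, not the
	// provisioner's actual code): list discovery.k8s.io/v1 EndpointSlices
	// instead of the deprecated v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			// Each slice carries a subset of one Service's endpoints.
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}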
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-323619 -n addons-323619
helpers_test.go:269: (dbg) Run:  kubectl --context addons-323619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-5f2d9 ingress-nginx-admission-create-qdjzp ingress-nginx-admission-patch-9ngs2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-323619 describe pod hello-world-app-5d498dc89-5f2d9 ingress-nginx-admission-create-qdjzp ingress-nginx-admission-patch-9ngs2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-323619 describe pod hello-world-app-5d498dc89-5f2d9 ingress-nginx-admission-create-qdjzp ingress-nginx-admission-patch-9ngs2: exit status 1 (70.631216ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-5f2d9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-323619/192.168.39.233
	Start Time:       Mon, 20 Oct 2025 12:03:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bfwdj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bfwdj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-5f2d9 to addons-323619
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdjzp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9ngs2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-323619 describe pod hello-world-app-5d498dc89-5f2d9 ingress-nginx-admission-create-qdjzp ingress-nginx-admission-patch-9ngs2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable ingress-dns --alsologtostderr -v=1: (1.611624032s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable ingress --alsologtostderr -v=1: (7.78465336s)
--- FAIL: TestAddons/parallel/Ingress (161.17s)
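
The failing step above is the in-VM curl probe: `minikube ssh` propagates the remote command's exit status, and curl's exit status 28 means the request timed out, so the "Process exited with status 28" in the stderr block is the ingress never answering within the SSH session. A minimal Go sketch of that probe for local triage, not the suite's code; the profile name and binary path are copied from this run, and the "-m 30" timeout is an addition here:

	// A sketch of the probe that timed out above: run the same curl inside
	// the VM via "minikube ssh" and report success or failure.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command as addons_test.go:264, plus "-m 30" (added here) so a
		// dead ingress fails fast with curl's timeout status 28.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-323619",
			"ssh", "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("ingress probe failed: %v\n%s", err, out)
		} else {
			fmt.Printf("ingress probe OK:\n%s", out)
		}
	}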

                                                
                                    
TestPreload (158.88s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344364 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344364 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m31.291111665s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344364 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-344364 image pull gcr.io/k8s-minikube/busybox: (3.788739932s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-344364
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-344364: (6.809058078s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344364 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:53:43.771323  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344364 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.127739944s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344364 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-20 12:54:07.061122002 +0000 UTC m=+3444.231100634
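
The assertion that failed is a substring check over `image list` output: busybox was pulled before the stop, but is absent from the list above after the restart. A standalone Go sketch of the same check (profile name and binary path copied from this run; the program itself is hypothetical, not preload_test.go):

	// A sketch of the failed check: require the previously pulled busybox
	// image to still appear in "minikube image list" after the restart.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-344364", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Printf("image list failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Printf("FAIL: busybox missing from:\n%s", out)
		} else {
			fmt.Println("OK: busybox retained across restart")
		}
	}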
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-344364 -n test-preload-344364
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344364 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-344364 logs -n 25: (1.030409236s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-874962 ssh -n multinode-874962-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ ssh     │ multinode-874962 ssh -n multinode-874962 sudo cat /home/docker/cp-test_multinode-874962-m03_multinode-874962.txt                                                                    │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ cp      │ multinode-874962 cp multinode-874962-m03:/home/docker/cp-test.txt multinode-874962-m02:/home/docker/cp-test_multinode-874962-m03_multinode-874962-m02.txt                           │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ ssh     │ multinode-874962 ssh -n multinode-874962-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ ssh     │ multinode-874962 ssh -n multinode-874962-m02 sudo cat /home/docker/cp-test_multinode-874962-m03_multinode-874962-m02.txt                                                            │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ node    │ multinode-874962 node stop m03                                                                                                                                                      │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:38 UTC │
	│ node    │ multinode-874962 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:38 UTC │ 20 Oct 25 12:41 UTC │
	│ node    │ list -p multinode-874962                                                                                                                                                            │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │                     │
	│ stop    │ -p multinode-874962                                                                                                                                                                 │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:41 UTC │ 20 Oct 25 12:44 UTC │
	│ start   │ -p multinode-874962 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:44 UTC │ 20 Oct 25 12:46 UTC │
	│ node    │ list -p multinode-874962                                                                                                                                                            │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:46 UTC │                     │
	│ node    │ multinode-874962 node delete m03                                                                                                                                                    │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:46 UTC │ 20 Oct 25 12:46 UTC │
	│ stop    │ multinode-874962 stop                                                                                                                                                               │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:46 UTC │ 20 Oct 25 12:49 UTC │
	│ start   │ -p multinode-874962 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:49 UTC │ 20 Oct 25 12:50 UTC │
	│ node    │ list -p multinode-874962                                                                                                                                                            │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:50 UTC │                     │
	│ start   │ -p multinode-874962-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-874962-m02 │ jenkins │ v1.37.0 │ 20 Oct 25 12:50 UTC │                     │
	│ start   │ -p multinode-874962-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-874962-m03 │ jenkins │ v1.37.0 │ 20 Oct 25 12:50 UTC │ 20 Oct 25 12:51 UTC │
	│ node    │ add -p multinode-874962                                                                                                                                                             │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:51 UTC │                     │
	│ delete  │ -p multinode-874962-m03                                                                                                                                                             │ multinode-874962-m03 │ jenkins │ v1.37.0 │ 20 Oct 25 12:51 UTC │ 20 Oct 25 12:51 UTC │
	│ delete  │ -p multinode-874962                                                                                                                                                                 │ multinode-874962     │ jenkins │ v1.37.0 │ 20 Oct 25 12:51 UTC │ 20 Oct 25 12:51 UTC │
	│ start   │ -p test-preload-344364 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-344364  │ jenkins │ v1.37.0 │ 20 Oct 25 12:51 UTC │ 20 Oct 25 12:53 UTC │
	│ image   │ test-preload-344364 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-344364  │ jenkins │ v1.37.0 │ 20 Oct 25 12:53 UTC │ 20 Oct 25 12:53 UTC │
	│ stop    │ -p test-preload-344364                                                                                                                                                              │ test-preload-344364  │ jenkins │ v1.37.0 │ 20 Oct 25 12:53 UTC │ 20 Oct 25 12:53 UTC │
	│ start   │ -p test-preload-344364 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-344364  │ jenkins │ v1.37.0 │ 20 Oct 25 12:53 UTC │ 20 Oct 25 12:54 UTC │
	│ image   │ test-preload-344364 image list                                                                                                                                                      │ test-preload-344364  │ jenkins │ v1.37.0 │ 20 Oct 25 12:54 UTC │ 20 Oct 25 12:54 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:53:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:53:12.755114  174604 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:53:12.755222  174604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:53:12.755230  174604 out.go:374] Setting ErrFile to fd 2...
	I1020 12:53:12.755234  174604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:53:12.755442  174604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:53:12.755856  174604 out.go:368] Setting JSON to false
	I1020 12:53:12.756686  174604 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5728,"bootTime":1760959065,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:53:12.756775  174604 start.go:141] virtualization: kvm guest
	I1020 12:53:12.758742  174604 out.go:179] * [test-preload-344364] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:53:12.759863  174604 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:53:12.759896  174604 notify.go:220] Checking for updates...
	I1020 12:53:12.761823  174604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:53:12.763283  174604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:53:12.764354  174604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 12:53:12.765435  174604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:53:12.766413  174604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:53:12.767741  174604 config.go:182] Loaded profile config "test-preload-344364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1020 12:53:12.768147  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:12.768237  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:12.782192  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1020 12:53:12.782651  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:12.783228  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:12.783261  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:12.783668  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:12.783894  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:12.785362  174604 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1020 12:53:12.786507  174604 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:53:12.786831  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:12.786876  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:12.800433  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I1020 12:53:12.800817  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:12.801216  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:12.801231  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:12.801551  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:12.801741  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:12.832923  174604 out.go:179] * Using the kvm2 driver based on existing profile
	I1020 12:53:12.833765  174604 start.go:305] selected driver: kvm2
	I1020 12:53:12.833776  174604 start.go:925] validating driver "kvm2" against &{Name:test-preload-344364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-344364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:53:12.833917  174604 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:53:12.834585  174604 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:53:12.834656  174604 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 12:53:12.847055  174604 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 12:53:12.847075  174604 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 12:53:12.859116  174604 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
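The two install.go entries above validate the kvm2 driver binary found on PATH by asking it for its version. A minimal sketch of that kind of check in Go (hypothetical helper, not minikube's actual install.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    // driverVersion runs "<binary> version" and extracts a semver-looking
    // token from its output. This mirrors the validation step logged by
    // install.go above, but is a simplified sketch.
    func driverVersion(binary string) (string, error) {
    	out, err := exec.Command(binary, "version").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("running %s version: %w", binary, err)
    	}
    	v := regexp.MustCompile(`\d+\.\d+\.\d+`).FindString(string(out))
    	if v == "" {
    		return "", fmt.Errorf("no version found in output: %q", out)
    	}
    	return v, nil
    }

    func main() {
    	v, err := driverVersion("docker-machine-driver-kvm2")
    	if err != nil {
    		fmt.Println("validation failed:", err)
    		return
    	}
    	fmt.Println("driver version:", v) // e.g. "1.37.0" as in the log above
    }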
	I1020 12:53:12.859469  174604 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:53:12.859501  174604 cni.go:84] Creating CNI manager for ""
	I1020 12:53:12.859548  174604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 12:53:12.859599  174604 start.go:349] cluster config:
	{Name:test-preload-344364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-344364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:53:12.859696  174604 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:53:12.861116  174604 out.go:179] * Starting "test-preload-344364" primary control-plane node in "test-preload-344364" cluster
	I1020 12:53:12.862195  174604 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1020 12:53:12.963443  174604 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1020 12:53:12.963469  174604 cache.go:58] Caching tarball of preloaded images
	I1020 12:53:12.963633  174604 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1020 12:53:12.965190  174604 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1020 12:53:12.966247  174604 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1020 12:53:13.077835  174604 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1020 12:53:13.077877  174604 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1020 12:53:23.464574  174604 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
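The preload download above carries a ?checksum=md5:... parameter so the tarball fetched from GCS can be verified against the checksum returned by the GCS API (2acdb4dde52794f2167c79dcee7507ae). A minimal sketch of that verify-while-downloading pattern, reusing the URL and checksum from the log (an illustration; minikube itself delegates this to its download package):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // fetchWithMD5 downloads url to dest and fails if the body's MD5
    // digest does not match want (a hex string like the one logged above).
    func fetchWithMD5(url, dest, want string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unexpected status: %s", resp.Status)
    	}

    	f, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := md5.New()
    	// TeeReader feeds every byte written to the file through the hash too.
    	if _, err := io.Copy(f, io.TeeReader(resp.Body, h)); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	err := fetchWithMD5(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
    		"/tmp/preloaded.tar.lz4",
    		"2acdb4dde52794f2167c79dcee7507ae",
    	)
    	fmt.Println("download:", err)
    }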
	I1020 12:53:23.464722  174604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/config.json ...
	I1020 12:53:23.464958  174604 start.go:360] acquireMachinesLock for test-preload-344364: {Name:mk7379f3db3d78bd88fb45ecf1a2b8c8492f1da9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1020 12:53:23.465026  174604 start.go:364] duration metric: took 44.935µs to acquireMachinesLock for "test-preload-344364"
	I1020 12:53:23.465043  174604 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:53:23.465049  174604 fix.go:54] fixHost starting: 
	I1020 12:53:23.465352  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:23.465395  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:23.478994  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I1020 12:53:23.479480  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:23.479981  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:23.480008  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:23.480433  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:23.480683  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:23.480874  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetState
	I1020 12:53:23.482769  174604 fix.go:112] recreateIfNeeded on test-preload-344364: state=Stopped err=<nil>
	I1020 12:53:23.482794  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	W1020 12:53:23.482961  174604 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:53:23.484733  174604 out.go:252] * Restarting existing kvm2 VM for "test-preload-344364" ...
	I1020 12:53:23.484758  174604 main.go:141] libmachine: (test-preload-344364) Calling .Start
	I1020 12:53:23.484903  174604 main.go:141] libmachine: (test-preload-344364) starting domain...
	I1020 12:53:23.484920  174604 main.go:141] libmachine: (test-preload-344364) ensuring networks are active...
	I1020 12:53:23.485703  174604 main.go:141] libmachine: (test-preload-344364) Ensuring network default is active
	I1020 12:53:23.486033  174604 main.go:141] libmachine: (test-preload-344364) Ensuring network mk-test-preload-344364 is active
	I1020 12:53:23.486455  174604 main.go:141] libmachine: (test-preload-344364) getting domain XML...
	I1020 12:53:23.487561  174604 main.go:141] libmachine: (test-preload-344364) DBG | starting domain XML:
	I1020 12:53:23.487582  174604 main.go:141] libmachine: (test-preload-344364) DBG | <domain type='kvm'>
	I1020 12:53:23.487599  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <name>test-preload-344364</name>
	I1020 12:53:23.487612  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <uuid>952c3f2a-ab07-4853-89b7-76941170d1fc</uuid>
	I1020 12:53:23.487682  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <memory unit='KiB'>3145728</memory>
	I1020 12:53:23.487699  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1020 12:53:23.487723  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <vcpu placement='static'>2</vcpu>
	I1020 12:53:23.487738  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <os>
	I1020 12:53:23.487749  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1020 12:53:23.487757  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <boot dev='cdrom'/>
	I1020 12:53:23.487770  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <boot dev='hd'/>
	I1020 12:53:23.487781  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <bootmenu enable='no'/>
	I1020 12:53:23.487789  174604 main.go:141] libmachine: (test-preload-344364) DBG |   </os>
	I1020 12:53:23.487799  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <features>
	I1020 12:53:23.487824  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <acpi/>
	I1020 12:53:23.487848  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <apic/>
	I1020 12:53:23.487858  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <pae/>
	I1020 12:53:23.487866  174604 main.go:141] libmachine: (test-preload-344364) DBG |   </features>
	I1020 12:53:23.487877  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1020 12:53:23.487885  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <clock offset='utc'/>
	I1020 12:53:23.487896  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <on_poweroff>destroy</on_poweroff>
	I1020 12:53:23.487912  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <on_reboot>restart</on_reboot>
	I1020 12:53:23.487929  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <on_crash>destroy</on_crash>
	I1020 12:53:23.487942  174604 main.go:141] libmachine: (test-preload-344364) DBG |   <devices>
	I1020 12:53:23.487954  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1020 12:53:23.487964  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <disk type='file' device='cdrom'>
	I1020 12:53:23.487983  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <driver name='qemu' type='raw'/>
	I1020 12:53:23.488001  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/boot2docker.iso'/>
	I1020 12:53:23.488016  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <target dev='hdc' bus='scsi'/>
	I1020 12:53:23.488041  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <readonly/>
	I1020 12:53:23.488064  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1020 12:53:23.488075  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </disk>
	I1020 12:53:23.488084  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <disk type='file' device='disk'>
	I1020 12:53:23.488095  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1020 12:53:23.488105  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <source file='/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/test-preload-344364.rawdisk'/>
	I1020 12:53:23.488116  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <target dev='hda' bus='virtio'/>
	I1020 12:53:23.488121  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1020 12:53:23.488139  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </disk>
	I1020 12:53:23.488158  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1020 12:53:23.488171  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1020 12:53:23.488180  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </controller>
	I1020 12:53:23.488191  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1020 12:53:23.488203  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1020 12:53:23.488217  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1020 12:53:23.488229  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </controller>
	I1020 12:53:23.488241  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <interface type='network'>
	I1020 12:53:23.488253  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <mac address='52:54:00:c7:ca:1b'/>
	I1020 12:53:23.488260  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <source network='mk-test-preload-344364'/>
	I1020 12:53:23.488269  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <model type='virtio'/>
	I1020 12:53:23.488281  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1020 12:53:23.488295  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </interface>
	I1020 12:53:23.488308  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <interface type='network'>
	I1020 12:53:23.488319  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <mac address='52:54:00:46:3d:f9'/>
	I1020 12:53:23.488342  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <source network='default'/>
	I1020 12:53:23.488350  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <model type='virtio'/>
	I1020 12:53:23.488366  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1020 12:53:23.488383  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </interface>
	I1020 12:53:23.488395  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <serial type='pty'>
	I1020 12:53:23.488421  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <target type='isa-serial' port='0'>
	I1020 12:53:23.488432  174604 main.go:141] libmachine: (test-preload-344364) DBG |         <model name='isa-serial'/>
	I1020 12:53:23.488442  174604 main.go:141] libmachine: (test-preload-344364) DBG |       </target>
	I1020 12:53:23.488450  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </serial>
	I1020 12:53:23.488464  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <console type='pty'>
	I1020 12:53:23.488476  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <target type='serial' port='0'/>
	I1020 12:53:23.488488  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </console>
	I1020 12:53:23.488500  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <input type='mouse' bus='ps2'/>
	I1020 12:53:23.488511  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <input type='keyboard' bus='ps2'/>
	I1020 12:53:23.488521  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <audio id='1' type='none'/>
	I1020 12:53:23.488531  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <memballoon model='virtio'>
	I1020 12:53:23.488545  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1020 12:53:23.488559  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </memballoon>
	I1020 12:53:23.488568  174604 main.go:141] libmachine: (test-preload-344364) DBG |     <rng model='virtio'>
	I1020 12:53:23.488581  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <backend model='random'>/dev/random</backend>
	I1020 12:53:23.488596  174604 main.go:141] libmachine: (test-preload-344364) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1020 12:53:23.488606  174604 main.go:141] libmachine: (test-preload-344364) DBG |     </rng>
	I1020 12:53:23.488614  174604 main.go:141] libmachine: (test-preload-344364) DBG |   </devices>
	I1020 12:53:23.488628  174604 main.go:141] libmachine: (test-preload-344364) DBG | </domain>
	I1020 12:53:23.488640  174604 main.go:141] libmachine: (test-preload-344364) DBG | 
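The DBG lines above dump the libvirt domain XML; the steps that follow extract the MAC address 52:54:00:c7:ca:1b from it in order to match a DHCP lease. A sketch of pulling interface MACs and source networks out of that XML with encoding/xml (struct shape inferred from the XML shown, not taken from minikube's source):

    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    // Just enough structure to reach <interface><mac address=...> and
    // <source network=...> in the domain XML printed above.
    type domain struct {
    	Interfaces []struct {
    		MAC struct {
    			Address string `xml:"address,attr"`
    		} `xml:"mac"`
    		Source struct {
    			Network string `xml:"network,attr"`
    		} `xml:"source"`
    	} `xml:"devices>interface"`
    }

    func main() {
    	const domXML = `<domain type='kvm'>
      <devices>
        <interface type='network'>
          <mac address='52:54:00:c7:ca:1b'/>
          <source network='mk-test-preload-344364'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:46:3d:f9'/>
          <source network='default'/>
        </interface>
      </devices>
    </domain>`

    	var d domain
    	if err := xml.Unmarshal([]byte(domXML), &d); err != nil {
    		panic(err)
    	}
    	for _, iface := range d.Interfaces {
    		fmt.Printf("network %s -> MAC %s\n", iface.Source.Network, iface.MAC.Address)
    	}
    }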
	I1020 12:53:24.768386  174604 main.go:141] libmachine: (test-preload-344364) waiting for domain to start...
	I1020 12:53:24.769845  174604 main.go:141] libmachine: (test-preload-344364) domain is now running
	I1020 12:53:24.769875  174604 main.go:141] libmachine: (test-preload-344364) waiting for IP...
	I1020 12:53:24.770688  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:24.771236  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has current primary IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:24.771256  174604 main.go:141] libmachine: (test-preload-344364) found domain IP: 192.168.39.56
	I1020 12:53:24.771269  174604 main.go:141] libmachine: (test-preload-344364) reserving static IP address...
	I1020 12:53:24.771740  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "test-preload-344364", mac: "52:54:00:c7:ca:1b", ip: "192.168.39.56"} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:51:46 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:24.771809  174604 main.go:141] libmachine: (test-preload-344364) DBG | skip adding static IP to network mk-test-preload-344364 - found existing host DHCP lease matching {name: "test-preload-344364", mac: "52:54:00:c7:ca:1b", ip: "192.168.39.56"}
	I1020 12:53:24.771841  174604 main.go:141] libmachine: (test-preload-344364) reserved static IP address 192.168.39.56 for domain test-preload-344364
	I1020 12:53:24.771863  174604 main.go:141] libmachine: (test-preload-344364) waiting for SSH...
	I1020 12:53:24.771879  174604 main.go:141] libmachine: (test-preload-344364) DBG | Getting to WaitForSSH function...
	I1020 12:53:24.774127  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:24.774466  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:51:46 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:24.774507  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:24.774644  174604 main.go:141] libmachine: (test-preload-344364) DBG | Using SSH client type: external
	I1020 12:53:24.774672  174604 main.go:141] libmachine: (test-preload-344364) DBG | Using SSH private key: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa (-rw-------)
	I1020 12:53:24.774708  174604 main.go:141] libmachine: (test-preload-344364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1020 12:53:24.774735  174604 main.go:141] libmachine: (test-preload-344364) DBG | About to run SSH command:
	I1020 12:53:24.774768  174604 main.go:141] libmachine: (test-preload-344364) DBG | exit 0
	I1020 12:53:35.061142  174604 main.go:141] libmachine: (test-preload-344364) DBG | SSH cmd err, output: exit status 255: 
	I1020 12:53:35.061166  174604 main.go:141] libmachine: (test-preload-344364) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1020 12:53:35.061222  174604 main.go:141] libmachine: (test-preload-344364) DBG | command : exit 0
	I1020 12:53:35.061254  174604 main.go:141] libmachine: (test-preload-344364) DBG | err     : exit status 255
	I1020 12:53:35.061282  174604 main.go:141] libmachine: (test-preload-344364) DBG | output  : 
	I1020 12:53:38.063324  174604 main.go:141] libmachine: (test-preload-344364) DBG | Getting to WaitForSSH function...
	I1020 12:53:38.066078  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.066539  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.066570  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.066719  174604 main.go:141] libmachine: (test-preload-344364) DBG | Using SSH client type: external
	I1020 12:53:38.066778  174604 main.go:141] libmachine: (test-preload-344364) DBG | Using SSH private key: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa (-rw-------)
	I1020 12:53:38.066826  174604 main.go:141] libmachine: (test-preload-344364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1020 12:53:38.066843  174604 main.go:141] libmachine: (test-preload-344364) DBG | About to run SSH command:
	I1020 12:53:38.066866  174604 main.go:141] libmachine: (test-preload-344364) DBG | exit 0
	I1020 12:53:38.196649  174604 main.go:141] libmachine: (test-preload-344364) DBG | SSH cmd err, output: <nil>: 
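Note the wait-for-SSH pattern above: the probe at 12:53:24 fails with exit status 255 because sshd in the guest is not up yet, the runner pauses, and the retry at 12:53:38 succeeds. A minimal sketch of such a retry loop (flag list abbreviated from the log; the 3s pause and 2m deadline are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH keeps running `ssh ... exit 0` until it succeeds or the
    // deadline passes, mirroring the WaitForSSH probes in the log above.
    func waitForSSH(addr, keyPath string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+addr, "exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // guest sshd answered; provisioning can continue
    		} else if time.Now().After(stop) {
    			return fmt.Errorf("ssh not ready after %s: %w", deadline, err)
    		}
    		time.Sleep(3 * time.Second) // roughly the gap between probes above
    	}
    }

    func main() {
    	fmt.Println(waitForSSH("192.168.39.56", "/path/to/id_rsa", 2*time.Minute))
    }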
	I1020 12:53:38.197077  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetConfigRaw
	I1020 12:53:38.197703  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetIP
	I1020 12:53:38.200840  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.201311  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.201346  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.201700  174604 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/config.json ...
	I1020 12:53:38.201937  174604 machine.go:93] provisionDockerMachine start ...
	I1020 12:53:38.201962  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:38.202178  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.204727  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.205189  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.205221  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.205323  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:38.205563  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.205725  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.205913  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:38.206085  174604 main.go:141] libmachine: Using SSH client type: native
	I1020 12:53:38.206313  174604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1020 12:53:38.206325  174604 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:53:38.314392  174604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1020 12:53:38.314443  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetMachineName
	I1020 12:53:38.314675  174604 buildroot.go:166] provisioning hostname "test-preload-344364"
	I1020 12:53:38.314709  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetMachineName
	I1020 12:53:38.314917  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.317985  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.318431  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.318459  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.318632  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:38.318804  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.318952  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.319075  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:38.319217  174604 main.go:141] libmachine: Using SSH client type: native
	I1020 12:53:38.319441  174604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1020 12:53:38.319454  174604 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-344364 && echo "test-preload-344364" | sudo tee /etc/hostname
	I1020 12:53:38.442753  174604 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-344364
	
	I1020 12:53:38.442786  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.446054  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.446518  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.446543  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.446747  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:38.446956  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.447130  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.447296  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:38.447497  174604 main.go:141] libmachine: Using SSH client type: native
	I1020 12:53:38.447695  174604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1020 12:53:38.447713  174604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-344364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-344364/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-344364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:53:38.564973  174604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
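The shell snippet above is an idempotent hostname fix-up: if no /etc/hosts line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. The same logic expressed in Go over hosts-file text (a sketch of the snippet's behavior, not code from minikube):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry reproduces the shell above: leave the file alone if
    // the hostname is already present, otherwise replace an existing
    // 127.0.1.1 line or append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
    		return hosts
    	}
    	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	if line127.MatchString(hosts) {
    		return line127.ReplaceAllString(hosts, entry)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
    }

    func main() {
    	const hosts = "127.0.0.1 localhost\n127.0.1.1 oldname\n"
    	fmt.Print(ensureHostsEntry(hosts, "test-preload-344364"))
    }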
	I1020 12:53:38.565003  174604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21773-139101/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-139101/.minikube}
	I1020 12:53:38.565020  174604 buildroot.go:174] setting up certificates
	I1020 12:53:38.565028  174604 provision.go:84] configureAuth start
	I1020 12:53:38.565036  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetMachineName
	I1020 12:53:38.565307  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetIP
	I1020 12:53:38.568378  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.568797  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.568826  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.568974  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.571564  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.571941  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.571968  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.572146  174604 provision.go:143] copyHostCerts
	I1020 12:53:38.572209  174604 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem, removing ...
	I1020 12:53:38.572226  174604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem
	I1020 12:53:38.572296  174604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem (1082 bytes)
	I1020 12:53:38.572450  174604 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem, removing ...
	I1020 12:53:38.572470  174604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem
	I1020 12:53:38.572509  174604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem (1123 bytes)
	I1020 12:53:38.572590  174604 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem, removing ...
	I1020 12:53:38.572597  174604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem
	I1020 12:53:38.572621  174604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem (1675 bytes)
	I1020 12:53:38.572684  174604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem org=jenkins.test-preload-344364 san=[127.0.0.1 192.168.39.56 localhost minikube test-preload-344364]
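provision.go:117 above generates a server certificate whose subject alternative names are exactly the san=[...] list in the log. A compact sketch of producing a certificate with those DNS and IP SANs via crypto/x509 (self-signed here for brevity; the real step signs with the ca.pem/ca-key.pem pair):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-344364"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list logged by provision.go:117:
    		DNSNames:    []string{"localhost", "minikube", "test-preload-344364"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.56")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }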
	I1020 12:53:38.622333  174604 provision.go:177] copyRemoteCerts
	I1020 12:53:38.622387  174604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:53:38.622416  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.624654  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.624972  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.625008  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.625170  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:38.625342  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.625500  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:38.625620  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:38.709448  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1020 12:53:38.736442  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:53:38.764359  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 12:53:38.792558  174604 provision.go:87] duration metric: took 227.511312ms to configureAuth
	I1020 12:53:38.792601  174604 buildroot.go:189] setting minikube options for container-runtime
	I1020 12:53:38.792800  174604 config.go:182] Loaded profile config "test-preload-344364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1020 12:53:38.792891  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:38.796005  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.796369  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:38.796428  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:38.796599  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:38.796828  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.797001  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:38.797151  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:38.797343  174604 main.go:141] libmachine: Using SSH client type: native
	I1020 12:53:38.797644  174604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1020 12:53:38.797662  174604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:53:39.040504  174604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:53:39.040546  174604 machine.go:96] duration metric: took 838.589517ms to provisionDockerMachine
	I1020 12:53:39.040568  174604 start.go:293] postStartSetup for "test-preload-344364" (driver="kvm2")
	I1020 12:53:39.040586  174604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:53:39.040630  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:39.041018  174604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:53:39.041047  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:39.044114  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.044520  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:39.044553  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.044766  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:39.044933  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:39.045096  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:39.045294  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:39.131037  174604 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:53:39.135466  174604 info.go:137] Remote host: Buildroot 2025.02
	I1020 12:53:39.135493  174604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/addons for local assets ...
	I1020 12:53:39.135569  174604 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/files for local assets ...
	I1020 12:53:39.135664  174604 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem -> 1431312.pem in /etc/ssl/certs
	I1020 12:53:39.135807  174604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 12:53:39.146442  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /etc/ssl/certs/1431312.pem (1708 bytes)
	I1020 12:53:39.174327  174604 start.go:296] duration metric: took 133.73776ms for postStartSetup
	I1020 12:53:39.174373  174604 fix.go:56] duration metric: took 15.709324563s for fixHost
	I1020 12:53:39.174395  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:39.177294  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.177741  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:39.177763  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.177971  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:39.178185  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:39.178336  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:39.178486  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:39.178689  174604 main.go:141] libmachine: Using SSH client type: native
	I1020 12:53:39.178919  174604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1020 12:53:39.178930  174604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1020 12:53:39.288862  174604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760964819.252258765
	
	I1020 12:53:39.288889  174604 fix.go:216] guest clock: 1760964819.252258765
	I1020 12:53:39.288899  174604 fix.go:229] Guest: 2025-10-20 12:53:39.252258765 +0000 UTC Remote: 2025-10-20 12:53:39.174377442 +0000 UTC m=+26.456244112 (delta=77.881323ms)
	I1020 12:53:39.288966  174604 fix.go:200] guest clock delta is within tolerance: 77.881323ms
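fix.go above compares the guest clock (the date +%s.%N output) with the host-side timestamp and proceeds because the ~78ms delta is within tolerance. Parsing that seconds.nanoseconds string and computing the delta in Go (the tolerance constant is an assumption; the log only shows the check passing):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch turns "1760964819.252258765" (date +%s.%N) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
    	secs, err := strconv.ParseInt(sec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nanos int64
    	if frac != "" { // %N always yields nine digits, i.e. nanoseconds
    		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(secs, nanos), nil
    }

    func main() {
    	guest, err := parseEpoch("1760964819.252258765") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// Assumed tolerance for the sketch; the log only reports the delta passing.
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
    }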
	I1020 12:53:39.288978  174604 start.go:83] releasing machines lock for "test-preload-344364", held for 15.823939625s
	I1020 12:53:39.289010  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:39.289327  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetIP
	I1020 12:53:39.291996  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.292475  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:39.292498  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.292685  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:39.293179  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:39.293370  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:39.293491  174604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:53:39.293544  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:39.293600  174604 ssh_runner.go:195] Run: cat /version.json
	I1020 12:53:39.293629  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:39.296736  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.297026  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.297208  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:39.297235  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.297506  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:39.297679  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:39.297704  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:39.297755  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:39.297897  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:39.297897  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:39.298142  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:39.298160  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:39.298457  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:39.298624  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:39.398012  174604 ssh_runner.go:195] Run: systemctl --version
	I1020 12:53:39.404048  174604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:53:39.545325  174604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:53:39.551999  174604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:53:39.552068  174604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:53:39.570405  174604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1020 12:53:39.570428  174604 start.go:495] detecting cgroup driver to use...
	I1020 12:53:39.570497  174604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:53:39.589522  174604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:53:39.605343  174604 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:53:39.605432  174604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:53:39.621569  174604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:53:39.636887  174604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:53:39.771967  174604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:53:39.980863  174604 docker.go:234] disabling docker service ...
	I1020 12:53:39.980951  174604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:53:39.996840  174604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:53:40.011114  174604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:53:40.160364  174604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:53:40.296628  174604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:53:40.311543  174604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:53:40.333450  174604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1020 12:53:40.333532  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.345162  174604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 12:53:40.345251  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.356608  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.367909  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.379045  174604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:53:40.391067  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.402216  174604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:53:40.420901  174604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
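The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The same line-oriented rewriting in Go over config text, slightly simplified (a sketch standing in for the remote sed calls, not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Seed default_sysctls as the grep/sed pair above does (appended after
    	// cgroup_manager here, rather than after conmon_cgroup, for brevity).
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    			ReplaceAllString(conf, "${0}\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
    	}
    	fmt.Print(conf)
    }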
	I1020 12:53:40.432060  174604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:53:40.441666  174604 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1020 12:53:40.441715  174604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1020 12:53:40.459664  174604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:53:40.469841  174604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:53:40.598459  174604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:53:40.706141  174604 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:53:40.706209  174604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:53:40.711335  174604 start.go:563] Will wait 60s for crictl version
	I1020 12:53:40.711416  174604 ssh_runner.go:195] Run: which crictl
	I1020 12:53:40.715120  174604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1020 12:53:40.754967  174604 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1020 12:53:40.755057  174604 ssh_runner.go:195] Run: crio --version
	I1020 12:53:40.782213  174604 ssh_runner.go:195] Run: crio --version
	I1020 12:53:40.811357  174604 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1020 12:53:40.812270  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetIP
	I1020 12:53:40.815247  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:40.815664  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:40.815692  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:40.815936  174604 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1020 12:53:40.819963  174604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:53:40.834212  174604 kubeadm.go:883] updating cluster {Name:test-preload-344364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-344364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:53:40.834326  174604 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1020 12:53:40.834379  174604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:53:40.868825  174604 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1020 12:53:40.868908  174604 ssh_runner.go:195] Run: which lz4
	I1020 12:53:40.873019  174604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1020 12:53:40.879437  174604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1020 12:53:40.879472  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1020 12:53:42.289486  174604 crio.go:462] duration metric: took 1.416501477s to copy over tarball
	I1020 12:53:42.289564  174604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1020 12:53:43.830129  174604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.540526607s)
	I1020 12:53:43.830169  174604 crio.go:469] duration metric: took 1.540652915s to extract the tarball
	I1020 12:53:43.830191  174604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1020 12:53:43.870271  174604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:53:43.917338  174604 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:53:43.917365  174604 cache_images.go:85] Images are preloaded, skipping loading
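
The block above is the preload fast path end to end: crictl reports the kube images missing, stat finds no /preloaded.tar.lz4 on the VM, so the ~380 MB tarball is copied over and unpacked into /var with lz4-compressed tar (preserving security.capability xattrs), after which crictl re-lists the images and the tarball is removed. A rough sketch of the unpack-and-clean step, assuming tar, lz4, and passwordless sudo on the target host (illustrative, not minikube's actual code):

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into /var exactly as
// the logged command does, then deletes the tarball to reclaim disk.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
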
	I1020 12:53:43.917374  174604 kubeadm.go:934] updating node { 192.168.39.56 8443 v1.32.0 crio true true} ...
	I1020 12:53:43.917496  174604 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-344364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-344364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
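
What minikube prints as "kubelet [Unit] ... config: {...}" is the systemd drop-in it is about to install (the 318-byte 10-kubeadm.conf scp'd a few lines below): the empty ExecStart= clears the unit's inherited command line before the versioned kubelet binary is re-set with node-specific flags. A sketch of rendering such a drop-in with text/template, using the values visible in the log (the template text is illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Bin}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Bin":  "/var/lib/minikube/binaries/v1.32.0/kubelet",
		"Node": "test-preload-344364",
		"IP":   "192.168.39.56",
	})
}
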
	I1020 12:53:43.917561  174604 ssh_runner.go:195] Run: crio config
	I1020 12:53:43.964089  174604 cni.go:84] Creating CNI manager for ""
	I1020 12:53:43.964118  174604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 12:53:43.964155  174604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:53:43.964185  174604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-344364 NodeName:test-preload-344364 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:53:43.964338  174604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-344364"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.56"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
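
The generated kubeadm.yaml above bundles four documents into one file: InitConfiguration (local API endpoint, node registration, CRI socket), ClusterConfiguration (cert SANs, control-plane endpoint, per-component extraArgs), KubeletConfiguration (note the deliberately disabled disk-based eviction and image-GC thresholds), and KubeProxyConfiguration (with conntrack sysctl tuning skipped). One way to sanity-check such a file before running the init phases, assuming a kubeadm recent enough to ship `config validate` on PATH (a hedged sketch; the logged flow does not run this step):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// validateKubeadmConfig shells out to `kubeadm config validate`; exit
// status 0 means every document in the file parsed and passed checks.
func validateKubeadmConfig(path string) error {
	out, err := exec.Command("kubeadm", "config", "validate", "--config", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm config validate: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := validateKubeadmConfig("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
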
	I1020 12:53:43.964441  174604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1020 12:53:43.976269  174604 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:53:43.976349  174604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:53:43.987235  174604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1020 12:53:44.006737  174604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:53:44.026545  174604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1020 12:53:44.046017  174604 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1020 12:53:44.050126  174604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:53:44.063501  174604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:53:44.200725  174604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:53:44.238246  174604 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364 for IP: 192.168.39.56
	I1020 12:53:44.238270  174604 certs.go:195] generating shared ca certs ...
	I1020 12:53:44.238288  174604 certs.go:227] acquiring lock for ca certs: {Name:mk4d0d22cc1ac40184675be8ad2f5fa8f1c0ffc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:53:44.238491  174604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key
	I1020 12:53:44.238536  174604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key
	I1020 12:53:44.238544  174604 certs.go:257] generating profile certs ...
	I1020 12:53:44.238634  174604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.key
	I1020 12:53:44.238692  174604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/apiserver.key.63578150
	I1020 12:53:44.238730  174604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/proxy-client.key
	I1020 12:53:44.238868  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem (1338 bytes)
	W1020 12:53:44.238905  174604 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131_empty.pem, impossibly tiny 0 bytes
	I1020 12:53:44.238912  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 12:53:44.238933  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem (1082 bytes)
	I1020 12:53:44.238954  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:53:44.238974  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem (1675 bytes)
	I1020 12:53:44.239011  174604 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem (1708 bytes)
	I1020 12:53:44.239604  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:53:44.276074  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1020 12:53:44.307218  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:53:44.335126  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 12:53:44.362932  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 12:53:44.390291  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 12:53:44.417512  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:53:44.445096  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:53:44.472795  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /usr/share/ca-certificates/1431312.pem (1708 bytes)
	I1020 12:53:44.505179  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:53:44.532148  174604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem --> /usr/share/ca-certificates/143131.pem (1338 bytes)
	I1020 12:53:44.559338  174604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:53:44.579209  174604 ssh_runner.go:195] Run: openssl version
	I1020 12:53:44.585134  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1431312.pem && ln -fs /usr/share/ca-certificates/1431312.pem /etc/ssl/certs/1431312.pem"
	I1020 12:53:44.597170  174604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1431312.pem
	I1020 12:53:44.602070  174604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:06 /usr/share/ca-certificates/1431312.pem
	I1020 12:53:44.602130  174604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1431312.pem
	I1020 12:53:44.609453  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1431312.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:53:44.621583  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:53:44.633630  174604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:53:44.638569  174604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:53:44.638639  174604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:53:44.645624  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 12:53:44.658271  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143131.pem && ln -fs /usr/share/ca-certificates/143131.pem /etc/ssl/certs/143131.pem"
	I1020 12:53:44.671022  174604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143131.pem
	I1020 12:53:44.676010  174604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:06 /usr/share/ca-certificates/143131.pem
	I1020 12:53:44.676071  174604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143131.pem
	I1020 12:53:44.682951  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143131.pem /etc/ssl/certs/51391683.0"
	I1020 12:53:44.695980  174604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:53:44.701070  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:53:44.708264  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:53:44.715396  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:53:44.722921  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:53:44.730046  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:53:44.736944  174604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
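
The six openssl runs above are 24-hour expiry probes: `x509 -checkend 86400` exits non-zero if the certificate will expire within the next 86400 seconds, which would push minikube into its regeneration path instead of the "skipping valid ... cert" messages seen earlier. The same test in pure Go with crypto/x509 (a sketch; minikube shells out to openssl as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the Go equivalent of `openssl x509 -checkend`: it
// reports whether the first certificate in the PEM file expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
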
	I1020 12:53:44.743895  174604 kubeadm.go:400] StartCluster: {Name:test-preload-344364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-344364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:53:44.744026  174604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:53:44.744089  174604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:53:44.782900  174604 cri.go:89] found id: ""
	I1020 12:53:44.782985  174604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:53:44.795943  174604 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:53:44.795973  174604 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:53:44.796033  174604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:53:44.807572  174604 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:53:44.808108  174604 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-344364" does not appear in /home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:53:44.808239  174604 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-139101/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-344364" cluster setting kubeconfig missing "test-preload-344364" context setting]
	I1020 12:53:44.808523  174604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/kubeconfig: {Name:mkf6907ead759546580f2340b9e9b6432a1cd822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:53:44.809078  174604 kapi.go:59] client config for test-preload-344364: &rest.Config{Host:"https://192.168.39.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.key", CAFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:53:44.809556  174604 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1020 12:53:44.809572  174604 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1020 12:53:44.809576  174604 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1020 12:53:44.809580  174604 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1020 12:53:44.809584  174604 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1020 12:53:44.809978  174604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:53:44.821190  174604 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.56
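
The "does not require reconfiguration" verdict rests entirely on the diff -u exit status from the line above: 0 means the kubeadm.yaml already on the VM is byte-identical to the freshly generated kubeadm.yaml.new, 1 means they diverge and the control plane would need a config refresh. Reading that tri-state exit code in Go (a local-exec sketch; minikube runs the diff over SSH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsMatch maps diff's exit status: 0 -> identical, 1 -> files
// differ, anything else (e.g. a missing file) -> a real error.
func configsMatch(a, b string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", a, b).Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil
	}
	return false, err
}

func main() {
	fmt.Println(configsMatch("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
}
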
	I1020 12:53:44.821231  174604 kubeadm.go:1160] stopping kube-system containers ...
	I1020 12:53:44.821244  174604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1020 12:53:44.821293  174604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:53:44.859909  174604 cri.go:89] found id: ""
	I1020 12:53:44.859996  174604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1020 12:53:44.878775  174604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:53:44.891117  174604 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:53:44.891161  174604 kubeadm.go:157] found existing configuration files:
	
	I1020 12:53:44.891212  174604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:53:44.902019  174604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:53:44.902088  174604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:53:44.913608  174604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:53:44.924467  174604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:53:44.924550  174604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:53:44.936553  174604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:53:44.947633  174604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:53:44.947701  174604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:53:44.959381  174604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:53:44.970494  174604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:53:44.970563  174604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
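
The four grep/rm pairs above apply one rule per kubeconfig: if a file under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here grep exits 2 because none of the files exist yet), it is removed so the kubeadm kubeconfig phase below regenerates it against the right endpoint. The loop, sketched with local exec rather than minikube's ssh_runner:

package main

import "os/exec"

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is missing
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run() // kubeadm will recreate it
		}
	}
}
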
	I1020 12:53:44.982274  174604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:53:44.993834  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:53:45.045119  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:53:46.035065  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:53:46.270317  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:53:46.332621  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
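
Rather than a full `kubeadm init`, the restart path replays five targeted phases against the same config file: certs, kubeconfig, kubelet-start, control-plane, and etcd. The same sequence as a loop (a sketch; the logged commands additionally prepend /var/lib/minikube/binaries/v1.32.0 to PATH, which is assumed here, and kubeadm skips artifacts that already exist, such as still-valid certs):

package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("phase %v: %v\n%s", p, err, out)
		}
	}
}
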
	I1020 12:53:46.411335  174604 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:53:46.411425  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:46.912316  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:47.411566  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:47.911716  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:48.411551  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:48.911872  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:48.948191  174604 api_server.go:72] duration metric: took 2.536876333s to wait for apiserver process to appear ...
	I1020 12:53:48.948226  174604 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:53:48.948251  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:51.785341  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:53:51.785373  174604 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:53:51.785395  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:51.895678  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:53:51.895718  174604 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:53:51.949025  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:51.953714  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:53:51.953742  174604 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:53:52.448377  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:52.453358  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:53:52.453384  174604 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:53:52.948753  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:52.956207  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:53:52.956238  174604 api_server.go:103] status: https://192.168.39.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:53:53.448500  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:53.453108  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1020 12:53:53.459727  174604 api_server.go:141] control plane version: v1.32.0
	I1020 12:53:53.459765  174604 api_server.go:131] duration metric: took 4.511531627s to wait for apiserver health ...
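
The 403 followed by a string of 500s is the normal startup shape for /healthz: the anonymous probe is refused outright until authentication paths come up, then individual poststarthooks clear one by one (rbac/bootstrap-roles is the last holdout above) until the endpoint returns a bare 200 "ok". A bare-bones poller in the same spirit (sketch only: TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitHealthz polls /healthz every 500ms until it returns 200 "ok" or
// the deadline passes; non-200 bodies carry the per-hook failure lists.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.56:8443/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}
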
	I1020 12:53:53.459777  174604 cni.go:84] Creating CNI manager for ""
	I1020 12:53:53.459784  174604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 12:53:53.461341  174604 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1020 12:53:53.462384  174604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1020 12:53:53.478431  174604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1020 12:53:53.511480  174604 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:53:53.516367  174604 system_pods.go:59] 7 kube-system pods found
	I1020 12:53:53.516432  174604 system_pods.go:61] "coredns-668d6bf9bc-zhgl4" [66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:53:53.516442  174604 system_pods.go:61] "etcd-test-preload-344364" [428bc9d4-1b4e-4f2d-ba0a-35a04ba83fa4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:53:53.516453  174604 system_pods.go:61] "kube-apiserver-test-preload-344364" [ec479eef-002b-4ac1-8369-d2415ea28748] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:53:53.516459  174604 system_pods.go:61] "kube-controller-manager-test-preload-344364" [32883d91-dadd-4ec9-9d53-7706052f9c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:53:53.516463  174604 system_pods.go:61] "kube-proxy-l4s2d" [1db2cc76-14f7-425f-b539-04059baa8975] Running
	I1020 12:53:53.516469  174604 system_pods.go:61] "kube-scheduler-test-preload-344364" [2def8647-3dc9-48ee-8c2e-b02fc602c427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:53:53.516474  174604 system_pods.go:61] "storage-provisioner" [1b77c5e2-3352-46dd-90e2-d0a59bf09337] Running
	I1020 12:53:53.516481  174604 system_pods.go:74] duration metric: took 4.971359ms to wait for pod list to return data ...
	I1020 12:53:53.516490  174604 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:53:53.520385  174604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1020 12:53:53.520425  174604 node_conditions.go:123] node cpu capacity is 2
	I1020 12:53:53.520438  174604 node_conditions.go:105] duration metric: took 3.944088ms to run NodePressure ...
	I1020 12:53:53.520497  174604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:53:53.785470  174604 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1020 12:53:53.789051  174604 kubeadm.go:743] kubelet initialised
	I1020 12:53:53.789071  174604 kubeadm.go:744] duration metric: took 3.574588ms waiting for restarted kubelet to initialise ...
	I1020 12:53:53.789087  174604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:53:53.809546  174604 ops.go:34] apiserver oom_adj: -16
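
The oom_adj read is a sanity check that kubelet launched the apiserver with OOM protection: -16 on the legacy oom_adj scale (-17 to +15, lower meaning "kill last") makes the kernel's OOM killer strongly prefer other victims. Reproducing the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj` in Go:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// oomAdj returns the legacy oom_adj value of the newest process whose
// name matches pattern, as `cat /proc/$(pgrep -n ...)/oom_adj` would.
func oomAdj(pattern string) (string, error) {
	pid, err := exec.Command("pgrep", "-n", pattern).Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := oomAdj("kube-apiserver")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver oom_adj:", v)
}
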
	I1020 12:53:53.809571  174604 kubeadm.go:601] duration metric: took 9.013591554s to restartPrimaryControlPlane
	I1020 12:53:53.809581  174604 kubeadm.go:402] duration metric: took 9.06569993s to StartCluster
	I1020 12:53:53.809601  174604 settings.go:142] acquiring lock: {Name:mka845ade6dad629b08aff076fd014e4b2afad9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:53:53.809679  174604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:53:53.810296  174604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-139101/kubeconfig: {Name:mkf6907ead759546580f2340b9e9b6432a1cd822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:53:53.810595  174604 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.56 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:53:53.810686  174604 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:53:53.810808  174604 addons.go:69] Setting storage-provisioner=true in profile "test-preload-344364"
	I1020 12:53:53.810835  174604 addons.go:238] Setting addon storage-provisioner=true in "test-preload-344364"
	W1020 12:53:53.810844  174604 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:53:53.810854  174604 addons.go:69] Setting default-storageclass=true in profile "test-preload-344364"
	I1020 12:53:53.810880  174604 host.go:66] Checking if "test-preload-344364" exists ...
	I1020 12:53:53.810884  174604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-344364"
	I1020 12:53:53.810899  174604 config.go:182] Loaded profile config "test-preload-344364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1020 12:53:53.811294  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:53.811343  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:53.811347  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:53.811396  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:53.812166  174604 out.go:179] * Verifying Kubernetes components...
	I1020 12:53:53.813279  174604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:53:53.825526  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41151
	I1020 12:53:53.825667  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I1020 12:53:53.826018  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:53.826051  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:53.826532  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:53.826533  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:53.826559  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:53.826567  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:53.827031  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:53.827039  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:53.827255  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetState
	I1020 12:53:53.827663  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:53.827712  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:53.829929  174604 kapi.go:59] client config for test-preload-344364: &rest.Config{Host:"https://192.168.39.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.key", CAFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:53:53.830321  174604 addons.go:238] Setting addon default-storageclass=true in "test-preload-344364"
	W1020 12:53:53.830345  174604 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:53:53.830378  174604 host.go:66] Checking if "test-preload-344364" exists ...
	I1020 12:53:53.830790  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:53.830842  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:53.842340  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I1020 12:53:53.842830  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:53.843289  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:53.843313  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:53.843810  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:53.844038  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetState
	I1020 12:53:53.844556  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35593
	I1020 12:53:53.845005  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:53.845463  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:53.845495  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:53.845887  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:53.846431  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:53.846502  174604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:53:53.846549  174604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:53:53.850508  174604 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:53:53.851581  174604 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:53:53.851599  174604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:53:53.851617  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:53.855504  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:53.856107  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:53.856141  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:53.856380  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:53.856653  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:53.856825  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:53.856997  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:53.862186  174604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I1020 12:53:53.862600  174604 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:53:53.863048  174604 main.go:141] libmachine: Using API Version  1
	I1020 12:53:53.863070  174604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:53:53.863465  174604 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:53:53.863677  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetState
	I1020 12:53:53.865319  174604 main.go:141] libmachine: (test-preload-344364) Calling .DriverName
	I1020 12:53:53.865554  174604 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:53:53.865575  174604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:53:53.865597  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHHostname
	I1020 12:53:53.868825  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:53.869350  174604 main.go:141] libmachine: (test-preload-344364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:ca:1b", ip: ""} in network mk-test-preload-344364: {Iface:virbr1 ExpiryTime:2025-10-20 13:53:34 +0000 UTC Type:0 Mac:52:54:00:c7:ca:1b Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:test-preload-344364 Clientid:01:52:54:00:c7:ca:1b}
	I1020 12:53:53.869381  174604 main.go:141] libmachine: (test-preload-344364) DBG | domain test-preload-344364 has defined IP address 192.168.39.56 and MAC address 52:54:00:c7:ca:1b in network mk-test-preload-344364
	I1020 12:53:53.869572  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHPort
	I1020 12:53:53.869736  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHKeyPath
	I1020 12:53:53.869905  174604 main.go:141] libmachine: (test-preload-344364) Calling .GetSSHUsername
	I1020 12:53:53.870045  174604 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/test-preload-344364/id_rsa Username:docker}
	I1020 12:53:54.041271  174604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:53:54.062211  174604 node_ready.go:35] waiting up to 6m0s for node "test-preload-344364" to be "Ready" ...
	I1020 12:53:54.067059  174604 node_ready.go:49] node "test-preload-344364" is "Ready"
	I1020 12:53:54.067104  174604 node_ready.go:38] duration metric: took 4.811688ms for node "test-preload-344364" to be "Ready" ...
	I1020 12:53:54.067123  174604 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:53:54.067181  174604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:53:54.089241  174604 api_server.go:72] duration metric: took 278.607244ms to wait for apiserver process to appear ...
	I1020 12:53:54.089279  174604 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:53:54.089317  174604 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1020 12:53:54.094875  174604 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1020 12:53:54.095871  174604 api_server.go:141] control plane version: v1.32.0
	I1020 12:53:54.095895  174604 api_server.go:131] duration metric: took 6.607084ms to wait for apiserver health ...
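
The healthz wait above is a plain HTTPS poll: the client GETs /healthz until the apiserver answers 200 with the body "ok". A minimal Go sketch of such a probe, using InsecureSkipVerify in place of the cluster CA pool the real client loads, with an illustrative URL and timeout:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Sketch only: production code should verify the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.39.56:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
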
	I1020 12:53:54.095906  174604 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:53:54.098695  174604 system_pods.go:59] 7 kube-system pods found
	I1020 12:53:54.098727  174604 system_pods.go:61] "coredns-668d6bf9bc-zhgl4" [66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:53:54.098737  174604 system_pods.go:61] "etcd-test-preload-344364" [428bc9d4-1b4e-4f2d-ba0a-35a04ba83fa4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:53:54.098745  174604 system_pods.go:61] "kube-apiserver-test-preload-344364" [ec479eef-002b-4ac1-8369-d2415ea28748] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:53:54.098754  174604 system_pods.go:61] "kube-controller-manager-test-preload-344364" [32883d91-dadd-4ec9-9d53-7706052f9c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:53:54.098762  174604 system_pods.go:61] "kube-proxy-l4s2d" [1db2cc76-14f7-425f-b539-04059baa8975] Running
	I1020 12:53:54.098770  174604 system_pods.go:61] "kube-scheduler-test-preload-344364" [2def8647-3dc9-48ee-8c2e-b02fc602c427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:53:54.098786  174604 system_pods.go:61] "storage-provisioner" [1b77c5e2-3352-46dd-90e2-d0a59bf09337] Running
	I1020 12:53:54.098794  174604 system_pods.go:74] duration metric: took 2.88044ms to wait for pod list to return data ...
	I1020 12:53:54.098806  174604 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:53:54.101004  174604 default_sa.go:45] found service account: "default"
	I1020 12:53:54.101022  174604 default_sa.go:55] duration metric: took 2.209459ms for default service account to be created ...
	I1020 12:53:54.101029  174604 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:53:54.103306  174604 system_pods.go:86] 7 kube-system pods found
	I1020 12:53:54.103332  174604 system_pods.go:89] "coredns-668d6bf9bc-zhgl4" [66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:53:54.103338  174604 system_pods.go:89] "etcd-test-preload-344364" [428bc9d4-1b4e-4f2d-ba0a-35a04ba83fa4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:53:54.103345  174604 system_pods.go:89] "kube-apiserver-test-preload-344364" [ec479eef-002b-4ac1-8369-d2415ea28748] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:53:54.103351  174604 system_pods.go:89] "kube-controller-manager-test-preload-344364" [32883d91-dadd-4ec9-9d53-7706052f9c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:53:54.103355  174604 system_pods.go:89] "kube-proxy-l4s2d" [1db2cc76-14f7-425f-b539-04059baa8975] Running
	I1020 12:53:54.103360  174604 system_pods.go:89] "kube-scheduler-test-preload-344364" [2def8647-3dc9-48ee-8c2e-b02fc602c427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:53:54.103364  174604 system_pods.go:89] "storage-provisioner" [1b77c5e2-3352-46dd-90e2-d0a59bf09337] Running
	I1020 12:53:54.103371  174604 system_pods.go:126] duration metric: took 2.336915ms to wait for k8s-apps to be running ...
	I1020 12:53:54.103380  174604 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:53:54.103449  174604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:53:54.119805  174604 system_svc.go:56] duration metric: took 16.415191ms WaitForService to wait for kubelet
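
The "systemctl is-active --quiet kubelet" check run over SSH above reports purely through its exit status: 0 means active, anything else means inactive, failed, or unknown unit. A small Go sketch of the same check run locally (os/exec standing in for minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone carries the answer.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
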
	I1020 12:53:54.119831  174604 kubeadm.go:586] duration metric: took 309.20831ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:53:54.119847  174604 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:53:54.123578  174604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1020 12:53:54.123600  174604 node_conditions.go:123] node cpu capacity is 2
	I1020 12:53:54.123612  174604 node_conditions.go:105] duration metric: took 3.760834ms to run NodePressure ...
	I1020 12:53:54.123624  174604 start.go:241] waiting for startup goroutines ...
	I1020 12:53:54.235592  174604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:53:54.244495  174604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
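
The two apply commands above are how the addon manifests take effect: the node's bundled kubectl is invoked with KUBECONFIG pointed at the node-local kubeconfig and applies the files previously copied into /etc/kubernetes/addons. A simplified Go sketch of that invocation pattern (paths are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	// Point kubectl at a specific kubeconfig; os/exec uses the last value
    	// for a duplicated environment key, so this overrides any inherited one.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("apply failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }
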
	I1020 12:53:54.855418  174604 main.go:141] libmachine: Making call to close driver server
	I1020 12:53:54.855449  174604 main.go:141] libmachine: (test-preload-344364) Calling .Close
	I1020 12:53:54.855523  174604 main.go:141] libmachine: Making call to close driver server
	I1020 12:53:54.855555  174604 main.go:141] libmachine: (test-preload-344364) Calling .Close
	I1020 12:53:54.855825  174604 main.go:141] libmachine: Successfully made call to close driver server
	I1020 12:53:54.855846  174604 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 12:53:54.855856  174604 main.go:141] libmachine: Making call to close driver server
	I1020 12:53:54.855863  174604 main.go:141] libmachine: (test-preload-344364) Calling .Close
	I1020 12:53:54.855875  174604 main.go:141] libmachine: (test-preload-344364) DBG | Closing plugin on server side
	I1020 12:53:54.855910  174604 main.go:141] libmachine: Successfully made call to close driver server
	I1020 12:53:54.855918  174604 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 12:53:54.855925  174604 main.go:141] libmachine: Making call to close driver server
	I1020 12:53:54.855932  174604 main.go:141] libmachine: (test-preload-344364) Calling .Close
	I1020 12:53:54.856113  174604 main.go:141] libmachine: Successfully made call to close driver server
	I1020 12:53:54.856129  174604 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 12:53:54.856140  174604 main.go:141] libmachine: Successfully made call to close driver server
	I1020 12:53:54.856155  174604 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 12:53:54.856118  174604 main.go:141] libmachine: (test-preload-344364) DBG | Closing plugin on server side
	I1020 12:53:54.862336  174604 main.go:141] libmachine: Making call to close driver server
	I1020 12:53:54.862352  174604 main.go:141] libmachine: (test-preload-344364) Calling .Close
	I1020 12:53:54.862572  174604 main.go:141] libmachine: Successfully made call to close driver server
	I1020 12:53:54.862587  174604 main.go:141] libmachine: Making call to close connection to plugin binary
	I1020 12:53:54.864269  174604 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:53:54.865282  174604 addons.go:514] duration metric: took 1.05460806s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 12:53:54.865320  174604 start.go:246] waiting for cluster config update ...
	I1020 12:53:54.865331  174604 start.go:255] writing updated cluster config ...
	I1020 12:53:54.865593  174604 ssh_runner.go:195] Run: rm -f paused
	I1020 12:53:54.870508  174604 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:53:54.870910  174604 kapi.go:59] client config for test-preload-344364: &rest.Config{Host:"https://192.168.39.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/profiles/test-preload-344364/client.key", CAFile:"/home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 12:53:54.873966  174604 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-zhgl4" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:53:56.879078  174604 pod_ready.go:104] pod "coredns-668d6bf9bc-zhgl4" is not "Ready", error: <nil>
	W1020 12:53:58.879832  174604 pod_ready.go:104] pod "coredns-668d6bf9bc-zhgl4" is not "Ready", error: <nil>
	W1020 12:54:01.380363  174604 pod_ready.go:104] pod "coredns-668d6bf9bc-zhgl4" is not "Ready", error: <nil>
	W1020 12:54:03.880241  174604 pod_ready.go:104] pod "coredns-668d6bf9bc-zhgl4" is not "Ready", error: <nil>
	I1020 12:54:05.380237  174604 pod_ready.go:94] pod "coredns-668d6bf9bc-zhgl4" is "Ready"
	I1020 12:54:05.380271  174604 pod_ready.go:86] duration metric: took 10.50628129s for pod "coredns-668d6bf9bc-zhgl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.381882  174604 pod_ready.go:83] waiting for pod "etcd-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.388004  174604 pod_ready.go:94] pod "etcd-test-preload-344364" is "Ready"
	I1020 12:54:05.388036  174604 pod_ready.go:86] duration metric: took 6.124904ms for pod "etcd-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.391549  174604 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.394678  174604 pod_ready.go:94] pod "kube-apiserver-test-preload-344364" is "Ready"
	I1020 12:54:05.394694  174604 pod_ready.go:86] duration metric: took 3.125722ms for pod "kube-apiserver-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.396944  174604 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.578263  174604 pod_ready.go:94] pod "kube-controller-manager-test-preload-344364" is "Ready"
	I1020 12:54:05.578290  174604 pod_ready.go:86] duration metric: took 181.328991ms for pod "kube-controller-manager-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:05.777481  174604 pod_ready.go:83] waiting for pod "kube-proxy-l4s2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:06.178085  174604 pod_ready.go:94] pod "kube-proxy-l4s2d" is "Ready"
	I1020 12:54:06.178118  174604 pod_ready.go:86] duration metric: took 400.609461ms for pod "kube-proxy-l4s2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:06.377180  174604 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:06.777454  174604 pod_ready.go:94] pod "kube-scheduler-test-preload-344364" is "Ready"
	I1020 12:54:06.777484  174604 pod_ready.go:86] duration metric: took 400.271744ms for pod "kube-scheduler-test-preload-344364" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:54:06.777495  174604 pod_ready.go:40] duration metric: took 11.906963933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
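
The pod_ready loop above re-fetches each pod and tests its Ready condition until it flips to True (or the pod is gone). A compact client-go sketch of the same predicate, with the kubeconfig path as a placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-zhgl4", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
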
	I1020 12:54:06.824165  174604 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1020 12:54:06.825833  174604 out.go:203] 
	W1020 12:54:06.826880  174604 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1020 12:54:06.827835  174604 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1020 12:54:06.828898  174604 out.go:179] * Done! kubectl is now configured to use "test-preload-344364" cluster and "default" namespace by default
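
The "minor skew: 2" figure a few lines up is simply the distance between kubectl's minor version (1.34) and the cluster's (1.32); kubectl is only supported within one minor version of the apiserver, which is why a skew of 2 triggers the warning. A tiny sketch of that arithmetic, with deliberately naive version parsing:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) int {
    	n, _ := strconv.Atoi(strings.Split(v, ".")[1])
    	return n
    }

    func main() {
    	kubectl, cluster := "1.34.1", "1.32.0"
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    	if skew > 1 {
    		fmt.Println("! kubectl may have incompatibilities with this cluster version")
    	}
    }
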
	
	
	==> CRI-O <==
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.731332717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964847731309165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61c220ba-70dd-4b68-992e-e22c01081820 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.732097509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41230719-b709-46c6-917f-4c7b0f332ae1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.732317682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41230719-b709-46c6-917f-4c7b0f332ae1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.732585505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609fa2b52f458f22987e18bc8532f4b7318956a5f8c93f66decc1fa26b595863,PodSandboxId:d0f516a7924665186471bfb9a2f94bc2cc94b98047f71012058d224f114c59c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760964836413140355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zhgl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4dfdf4d5f1f02cf2c5653d016e63e33963971f81e292d332ce2735a9893d4f,PodSandboxId:27ab91f3fccd648e066c07b682f5b9939d588f330ca760aa7ebb7d4ab6bdb17f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760964832775610352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l4s2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1db2cc76-14f7-425f-b539-04059baa8975,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f80f9a198c596604592c9eb4af7b56caff38710ec028231db155cf8303850784,PodSandboxId:7e9aae429fd74cbe0446aca821177429ec85822ed9e14ebadfcc4c43d47f6dc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760964832765744814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b
77c5e2-3352-46dd-90e2-d0a59bf09337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b11ce9a8edbddd354a26ad98f460b7d3bfc24504b64e3baea0c2e0a57de53,PodSandboxId:1cdbf6412894afcad7525838677c45250e444f6a15c871be3adbe39fb8e37f50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760964828645925560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3151fd2e
e758d1d60faa1ac436c1b12,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ecb0417afb1fe6f39b710f28842427fe1261b575f0005cadee29c6ba2d07a87,PodSandboxId:56a615898bd3dbd45c8c2c6e1501fd4c288e75f159f3492d220d40ce3fa70ef8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760964828662566845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e35fdad4bab1084e5ce6664b1b1550fe,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc03e67e902ac471f0d145ffb91ac20477116426581eed620e833b5429ede602,PodSandboxId:5ba22789e17ffb3c4a10214088280049349a9081c29183057d47c33a48e6a1ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760964828635489500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ad03d19e8b7be44a2f13b3eb608c6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39540611ec4111e30009d50fe913a0ee41bf355805cc6e399a6241ceb67d770,PodSandboxId:a0ba85921506bad611a6364152e5e881dbcc9a2146777e33a2fff841283bd397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760964828588549733,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4553c2cfb250f62743101eedf06f388b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41230719-b709-46c6-917f-4c7b0f332ae1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.769078472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90563dde-baa9-46e7-98e2-ea7712c25956 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.769139382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90563dde-baa9-46e7-98e2-ea7712c25956 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.770357832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8594d63-e993-4648-8c24-883398fb6314 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.770828419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964847770808316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8594d63-e993-4648-8c24-883398fb6314 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.771494979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47fc48d4-781d-4fb6-8048-a1d798c93092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.771723578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47fc48d4-781d-4fb6-8048-a1d798c93092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.771954411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609fa2b52f458f22987e18bc8532f4b7318956a5f8c93f66decc1fa26b595863,PodSandboxId:d0f516a7924665186471bfb9a2f94bc2cc94b98047f71012058d224f114c59c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760964836413140355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zhgl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4dfdf4d5f1f02cf2c5653d016e63e33963971f81e292d332ce2735a9893d4f,PodSandboxId:27ab91f3fccd648e066c07b682f5b9939d588f330ca760aa7ebb7d4ab6bdb17f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760964832775610352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l4s2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1db2cc76-14f7-425f-b539-04059baa8975,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f80f9a198c596604592c9eb4af7b56caff38710ec028231db155cf8303850784,PodSandboxId:7e9aae429fd74cbe0446aca821177429ec85822ed9e14ebadfcc4c43d47f6dc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760964832765744814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b
77c5e2-3352-46dd-90e2-d0a59bf09337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b11ce9a8edbddd354a26ad98f460b7d3bfc24504b64e3baea0c2e0a57de53,PodSandboxId:1cdbf6412894afcad7525838677c45250e444f6a15c871be3adbe39fb8e37f50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760964828645925560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3151fd2e
e758d1d60faa1ac436c1b12,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ecb0417afb1fe6f39b710f28842427fe1261b575f0005cadee29c6ba2d07a87,PodSandboxId:56a615898bd3dbd45c8c2c6e1501fd4c288e75f159f3492d220d40ce3fa70ef8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760964828662566845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e35fdad4bab1084e5ce6664b1b1550fe,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc03e67e902ac471f0d145ffb91ac20477116426581eed620e833b5429ede602,PodSandboxId:5ba22789e17ffb3c4a10214088280049349a9081c29183057d47c33a48e6a1ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760964828635489500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ad03d19e8b7be44a2f13b3eb608c6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39540611ec4111e30009d50fe913a0ee41bf355805cc6e399a6241ceb67d770,PodSandboxId:a0ba85921506bad611a6364152e5e881dbcc9a2146777e33a2fff841283bd397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760964828588549733,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4553c2cfb250f62743101eedf06f388b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47fc48d4-781d-4fb6-8048-a1d798c93092 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.806450282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98ba11be-20b0-4836-9dd9-fae50586ad5f name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.806549169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98ba11be-20b0-4836-9dd9-fae50586ad5f name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.808243945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af383bac-f36b-4389-b445-5b0c212b2c11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.808662274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964847808620456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af383bac-f36b-4389-b445-5b0c212b2c11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.809406617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be12733c-1d7e-4365-b29e-e0ceb86c6054 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.809469494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be12733c-1d7e-4365-b29e-e0ceb86c6054 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.809971467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609fa2b52f458f22987e18bc8532f4b7318956a5f8c93f66decc1fa26b595863,PodSandboxId:d0f516a7924665186471bfb9a2f94bc2cc94b98047f71012058d224f114c59c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760964836413140355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zhgl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4dfdf4d5f1f02cf2c5653d016e63e33963971f81e292d332ce2735a9893d4f,PodSandboxId:27ab91f3fccd648e066c07b682f5b9939d588f330ca760aa7ebb7d4ab6bdb17f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760964832775610352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l4s2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1db2cc76-14f7-425f-b539-04059baa8975,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f80f9a198c596604592c9eb4af7b56caff38710ec028231db155cf8303850784,PodSandboxId:7e9aae429fd74cbe0446aca821177429ec85822ed9e14ebadfcc4c43d47f6dc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760964832765744814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b
77c5e2-3352-46dd-90e2-d0a59bf09337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b11ce9a8edbddd354a26ad98f460b7d3bfc24504b64e3baea0c2e0a57de53,PodSandboxId:1cdbf6412894afcad7525838677c45250e444f6a15c871be3adbe39fb8e37f50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760964828645925560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3151fd2e
e758d1d60faa1ac436c1b12,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ecb0417afb1fe6f39b710f28842427fe1261b575f0005cadee29c6ba2d07a87,PodSandboxId:56a615898bd3dbd45c8c2c6e1501fd4c288e75f159f3492d220d40ce3fa70ef8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760964828662566845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e35fdad4bab1084e5ce6664b1b1550fe,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc03e67e902ac471f0d145ffb91ac20477116426581eed620e833b5429ede602,PodSandboxId:5ba22789e17ffb3c4a10214088280049349a9081c29183057d47c33a48e6a1ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760964828635489500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ad03d19e8b7be44a2f13b3eb608c6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39540611ec4111e30009d50fe913a0ee41bf355805cc6e399a6241ceb67d770,PodSandboxId:a0ba85921506bad611a6364152e5e881dbcc9a2146777e33a2fff841283bd397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760964828588549733,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4553c2cfb250f62743101eedf06f388b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be12733c-1d7e-4365-b29e-e0ceb86c6054 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.844391514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8180029d-d3cb-458a-832d-fa139fc83a73 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.844697257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8180029d-d3cb-458a-832d-fa139fc83a73 name=/runtime.v1.RuntimeService/Version
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.845874731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=737eabeb-c16b-405b-822f-effdf8579ed9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.846324080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964847846303486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=737eabeb-c16b-405b-822f-effdf8579ed9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.847001471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c65f371-b093-4825-98eb-2b1ce94771b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.847137237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c65f371-b093-4825-98eb-2b1ce94771b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 20 12:54:07 test-preload-344364 crio[830]: time="2025-10-20 12:54:07.847406201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:609fa2b52f458f22987e18bc8532f4b7318956a5f8c93f66decc1fa26b595863,PodSandboxId:d0f516a7924665186471bfb9a2f94bc2cc94b98047f71012058d224f114c59c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760964836413140355,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zhgl4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4dfdf4d5f1f02cf2c5653d016e63e33963971f81e292d332ce2735a9893d4f,PodSandboxId:27ab91f3fccd648e066c07b682f5b9939d588f330ca760aa7ebb7d4ab6bdb17f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760964832775610352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l4s2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1db2cc76-14f7-425f-b539-04059baa8975,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f80f9a198c596604592c9eb4af7b56caff38710ec028231db155cf8303850784,PodSandboxId:7e9aae429fd74cbe0446aca821177429ec85822ed9e14ebadfcc4c43d47f6dc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760964832765744814,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b
77c5e2-3352-46dd-90e2-d0a59bf09337,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf0b11ce9a8edbddd354a26ad98f460b7d3bfc24504b64e3baea0c2e0a57de53,PodSandboxId:1cdbf6412894afcad7525838677c45250e444f6a15c871be3adbe39fb8e37f50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760964828645925560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3151fd2e
e758d1d60faa1ac436c1b12,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ecb0417afb1fe6f39b710f28842427fe1261b575f0005cadee29c6ba2d07a87,PodSandboxId:56a615898bd3dbd45c8c2c6e1501fd4c288e75f159f3492d220d40ce3fa70ef8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760964828662566845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: e35fdad4bab1084e5ce6664b1b1550fe,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc03e67e902ac471f0d145ffb91ac20477116426581eed620e833b5429ede602,PodSandboxId:5ba22789e17ffb3c4a10214088280049349a9081c29183057d47c33a48e6a1ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760964828635489500,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ad03d19e8b7be44a2f13b3eb608c6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39540611ec4111e30009d50fe913a0ee41bf355805cc6e399a6241ceb67d770,PodSandboxId:a0ba85921506bad611a6364152e5e881dbcc9a2146777e33a2fff841283bd397,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760964828588549733,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-344364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4553c2cfb250f62743101eedf06f388b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c65f371-b093-4825-98eb-2b1ce94771b4 name=/runtime.v1.RuntimeService/ListContainers
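
The CRI-O tail above shows the CRI RuntimeService being polled (Version, ImageFsInfo, ListContainers) several times while the post-mortem logs are gathered, which is why the same ListContainers response recurs under fresh request ids. The same queries can be issued by hand with crictl; a minimal Go wrapper, assuming crictl is on PATH and CRI-O listens on its default socket:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirror the RuntimeService calls seen in the log: Version, ImageFsInfo, ListContainers.
    	for _, sub := range []string{"version", "imagefsinfo", "ps"} {
    		args := []string{"crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock", sub}
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("crictl %s failed: %v\n", sub, err)
    			continue
    		}
    		fmt.Printf("%s", out)
    	}
    }
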
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	609fa2b52f458       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   d0f516a792466       coredns-668d6bf9bc-zhgl4
	1e4dfdf4d5f1f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   27ab91f3fccd6       kube-proxy-l4s2d
	f80f9a198c596       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   7e9aae429fd74       storage-provisioner
	1ecb0417afb1f       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   56a615898bd3d       kube-controller-manager-test-preload-344364
	bf0b11ce9a8ed       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   1cdbf6412894a       kube-scheduler-test-preload-344364
	cc03e67e902ac       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   5ba22789e17ff       etcd-test-preload-344364
	e39540611ec41       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   a0ba85921506b       kube-apiserver-test-preload-344364
	
	
	==> coredns [609fa2b52f458f22987e18bc8532f4b7318956a5f8c93f66decc1fa26b595863] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43090 - 24940 "HINFO IN 5747663837805699447.5765587251551006673. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038289493s
	
	
	==> describe nodes <==
	Name:               test-preload-344364
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-344364
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=test-preload-344364
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_52_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:52:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-344364
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:54:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:53:53 +0000   Mon, 20 Oct 2025 12:52:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:53:53 +0000   Mon, 20 Oct 2025 12:52:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:53:53 +0000   Mon, 20 Oct 2025 12:52:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:53:53 +0000   Mon, 20 Oct 2025 12:53:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    test-preload-344364
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c3f2aab07485389b776941170d1fc
	  System UUID:                952c3f2a-ab07-4853-89b7-76941170d1fc
	  Boot ID:                    bf2081b0-3c10-421c-a40a-044b4af0c05f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-zhgl4                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     100s
	  kube-system                 etcd-test-preload-344364                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-344364             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-344364    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-l4s2d                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-test-preload-344364             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 98s                  kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node test-preload-344364 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node test-preload-344364 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node test-preload-344364 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     105s                 kubelet          Node test-preload-344364 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  105s                 kubelet          Node test-preload-344364 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s                 kubelet          Node test-preload-344364 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Normal   NodeReady                104s                 kubelet          Node test-preload-344364 status is now: NodeReady
	  Normal   RegisteredNode           101s                 node-controller  Node test-preload-344364 event: Registered Node test-preload-344364 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-344364 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-344364 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-344364 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-344364 has been rebooted, boot id: bf2081b0-3c10-421c-a40a-044b4af0c05f
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-344364 event: Registered Node test-preload-344364 in Controller
	
	
	==> dmesg <==
	[Oct20 12:53] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000035] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007095] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.091182] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082878] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.094138] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.455066] kauditd_printk_skb: 177 callbacks suppressed
	[Oct20 12:54] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [cc03e67e902ac471f0d145ffb91ac20477116426581eed620e833b5429ede602] <==
	{"level":"info","ts":"2025-10-20T12:53:49.041999Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fd3c3974c415d44","local-member-id":"be139f16c87a8e87","added-peer-id":"be139f16c87a8e87","added-peer-peer-urls":["https://192.168.39.56:2380"]}
	{"level":"info","ts":"2025-10-20T12:53:49.042103Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fd3c3974c415d44","local-member-id":"be139f16c87a8e87","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:53:49.044221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T12:53:49.043302Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:53:49.064222Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T12:53:49.064675Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"be139f16c87a8e87","initial-advertise-peer-urls":["https://192.168.39.56:2380"],"listen-peer-urls":["https://192.168.39.56:2380"],"advertise-client-urls":["https://192.168.39.56:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.56:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T12:53:49.064716Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T12:53:49.064765Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2025-10-20T12:53:49.064784Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.56:2380"}
	{"level":"info","ts":"2025-10-20T12:53:50.714330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-20T12:53:50.714377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-20T12:53:50.714394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 received MsgPreVoteResp from be139f16c87a8e87 at term 2"}
	{"level":"info","ts":"2025-10-20T12:53:50.714405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became candidate at term 3"}
	{"level":"info","ts":"2025-10-20T12:53:50.714427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 received MsgVoteResp from be139f16c87a8e87 at term 3"}
	{"level":"info","ts":"2025-10-20T12:53:50.714435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be139f16c87a8e87 became leader at term 3"}
	{"level":"info","ts":"2025-10-20T12:53:50.714442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be139f16c87a8e87 elected leader be139f16c87a8e87 at term 3"}
	{"level":"info","ts":"2025-10-20T12:53:50.716629Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"be139f16c87a8e87","local-member-attributes":"{Name:test-preload-344364 ClientURLs:[https://192.168.39.56:2379]}","request-path":"/0/members/be139f16c87a8e87/attributes","cluster-id":"7fd3c3974c415d44","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T12:53:50.716715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:53:50.717245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T12:53:50.717550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T12:53:50.717565Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-20T12:53:50.717549Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-20T12:53:50.718114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-20T12:53:50.718414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-20T12:53:50.719019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.56:2379"}
	
	
	==> kernel <==
	 12:54:08 up 0 min,  0 users,  load average: 0.66, 0.18, 0.06
	Linux test-preload-344364 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e39540611ec4111e30009d50fe913a0ee41bf355805cc6e399a6241ceb67d770] <==
	I1020 12:53:51.845772       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 12:53:51.845821       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:53:51.845901       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:53:51.845975       1 shared_informer.go:320] Caches are synced for configmaps
	I1020 12:53:51.848481       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1020 12:53:51.862018       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1020 12:53:51.862048       1 policy_source.go:240] refreshing policies
	I1020 12:53:51.862089       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1020 12:53:51.862139       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 12:53:51.862240       1 aggregator.go:171] initial CRD sync complete...
	I1020 12:53:51.862246       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 12:53:51.862250       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 12:53:51.862254       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:53:51.873669       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1020 12:53:51.880748       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 12:53:51.923513       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:53:52.405653       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1020 12:53:52.750012       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:53:53.612747       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1020 12:53:53.642311       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1020 12:53:53.672821       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:53:53.679344       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:53:55.043623       1 controller.go:615] quota admission added evaluator for: endpoints
	I1020 12:53:55.399918       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1020 12:53:55.442462       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1ecb0417afb1fe6f39b710f28842427fe1261b575f0005cadee29c6ba2d07a87] <==
	I1020 12:53:55.043399       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1020 12:53:55.044597       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1020 12:53:55.045830       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1020 12:53:55.048069       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1020 12:53:55.048117       1 shared_informer.go:320] Caches are synced for namespace
	I1020 12:53:55.051785       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1020 12:53:55.051895       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1020 12:53:55.055104       1 shared_informer.go:320] Caches are synced for expand
	I1020 12:53:55.057420       1 shared_informer.go:320] Caches are synced for TTL
	I1020 12:53:55.057528       1 shared_informer.go:320] Caches are synced for resource quota
	I1020 12:53:55.061819       1 shared_informer.go:320] Caches are synced for taint
	I1020 12:53:55.061914       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:53:55.061991       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-344364"
	I1020 12:53:55.062037       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 12:53:55.067519       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1020 12:53:55.078860       1 shared_informer.go:320] Caches are synced for garbage collector
	I1020 12:53:55.090425       1 shared_informer.go:320] Caches are synced for attach detach
	I1020 12:53:55.091750       1 shared_informer.go:320] Caches are synced for garbage collector
	I1020 12:53:55.091765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:53:55.091771       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:53:55.406212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="366.095597ms"
	I1020 12:53:55.406308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.484µs"
	I1020 12:53:56.494507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.463µs"
	I1020 12:54:05.118666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.253123ms"
	I1020 12:54:05.118924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="124.084µs"
	
	
	==> kube-proxy [1e4dfdf4d5f1f02cf2c5653d016e63e33963971f81e292d332ce2735a9893d4f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1020 12:53:53.069219       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1020 12:53:53.078067       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.56"]
	E1020 12:53:53.078140       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:53:53.115851       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1020 12:53:53.115895       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1020 12:53:53.115919       1 server_linux.go:170] "Using iptables Proxier"
	I1020 12:53:53.118343       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:53:53.118572       1 server.go:497] "Version info" version="v1.32.0"
	I1020 12:53:53.118617       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:53:53.120306       1 config.go:199] "Starting service config controller"
	I1020 12:53:53.120341       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1020 12:53:53.120369       1 config.go:105] "Starting endpoint slice config controller"
	I1020 12:53:53.120374       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1020 12:53:53.122909       1 config.go:329] "Starting node config controller"
	I1020 12:53:53.122937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1020 12:53:53.220792       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1020 12:53:53.220819       1 shared_informer.go:320] Caches are synced for service config
	I1020 12:53:53.223088       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bf0b11ce9a8edbddd354a26ad98f460b7d3bfc24504b64e3baea0c2e0a57de53] <==
	I1020 12:53:49.593580       1 serving.go:386] Generated self-signed cert in-memory
	W1020 12:53:51.786534       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 12:53:51.786875       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 12:53:51.788896       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 12:53:51.788987       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 12:53:51.861279       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1020 12:53:51.861355       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:53:51.870101       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:53:51.870220       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 12:53:51.872583       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1020 12:53:51.872652       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:53:51.970981       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: I1020 12:53:51.897066    1153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: I1020 12:53:51.898566    1153 setters.go:602] "Node became not ready" node="test-preload-344364" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-20T12:53:51Z","lastTransitionTime":"2025-10-20T12:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: E1020 12:53:51.917242    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-344364\" already exists" pod="kube-system/kube-apiserver-test-preload-344364"
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: I1020 12:53:51.917277    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-344364"
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: E1020 12:53:51.926315    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-344364\" already exists" pod="kube-system/kube-controller-manager-test-preload-344364"
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: I1020 12:53:51.926333    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-344364"
	Oct 20 12:53:51 test-preload-344364 kubelet[1153]: E1020 12:53:51.933708    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-344364\" already exists" pod="kube-system/kube-scheduler-test-preload-344364"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: I1020 12:53:52.326887    1153 apiserver.go:52] "Watching apiserver"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: E1020 12:53:52.338915    1153 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zhgl4" podUID="66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: I1020 12:53:52.353748    1153 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: I1020 12:53:52.399276    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1db2cc76-14f7-425f-b539-04059baa8975-xtables-lock\") pod \"kube-proxy-l4s2d\" (UID: \"1db2cc76-14f7-425f-b539-04059baa8975\") " pod="kube-system/kube-proxy-l4s2d"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: I1020 12:53:52.399563    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1db2cc76-14f7-425f-b539-04059baa8975-lib-modules\") pod \"kube-proxy-l4s2d\" (UID: \"1db2cc76-14f7-425f-b539-04059baa8975\") " pod="kube-system/kube-proxy-l4s2d"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: I1020 12:53:52.399695    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1b77c5e2-3352-46dd-90e2-d0a59bf09337-tmp\") pod \"storage-provisioner\" (UID: \"1b77c5e2-3352-46dd-90e2-d0a59bf09337\") " pod="kube-system/storage-provisioner"
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: E1020 12:53:52.400774    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: E1020 12:53:52.401053    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume podName:66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6 nodeName:}" failed. No retries permitted until 2025-10-20 12:53:52.901033552 +0000 UTC m=+6.660507664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume") pod "coredns-668d6bf9bc-zhgl4" (UID: "66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6") : object "kube-system"/"coredns" not registered
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: E1020 12:53:52.902715    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 20 12:53:52 test-preload-344364 kubelet[1153]: E1020 12:53:52.904073    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume podName:66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6 nodeName:}" failed. No retries permitted until 2025-10-20 12:53:53.902938809 +0000 UTC m=+7.662412921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume") pod "coredns-668d6bf9bc-zhgl4" (UID: "66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6") : object "kube-system"/"coredns" not registered
	Oct 20 12:53:53 test-preload-344364 kubelet[1153]: I1020 12:53:53.683289    1153 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 20 12:53:53 test-preload-344364 kubelet[1153]: E1020 12:53:53.911667    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 20 12:53:53 test-preload-344364 kubelet[1153]: E1020 12:53:53.911739    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume podName:66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6 nodeName:}" failed. No retries permitted until 2025-10-20 12:53:55.911724513 +0000 UTC m=+9.671198625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6-config-volume") pod "coredns-668d6bf9bc-zhgl4" (UID: "66d8dd2f-1f35-45b1-84ca-d0b2ba9b52a6") : object "kube-system"/"coredns" not registered
	Oct 20 12:53:56 test-preload-344364 kubelet[1153]: E1020 12:53:56.401409    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964836401054111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 20 12:53:56 test-preload-344364 kubelet[1153]: E1020 12:53:56.401822    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964836401054111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 20 12:54:05 test-preload-344364 kubelet[1153]: I1020 12:54:05.083365    1153 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 12:54:06 test-preload-344364 kubelet[1153]: E1020 12:54:06.407614    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964846406767121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 20 12:54:06 test-preload-344364 kubelet[1153]: E1020 12:54:06.407637    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760964846406767121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f80f9a198c596604592c9eb4af7b56caff38710ec028231db155cf8303850784] <==
	I1020 12:53:53.005005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-344364 -n test-preload-344364
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-344364 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-344364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-344364
--- FAIL: TestPreload (158.88s)
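
For local triage, the failing preload flow can be replayed by hand with the same driver and runtime. A minimal sketch, assuming a built minikube binary on PATH; the profile name, version pin, and exact assertion steps below are illustrative, not taken from the test source:

	# start a cluster on an older Kubernetes so a preload tarball is exercised
	minikube start -p preload-repro --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	# restart the same profile; a TestPreload-style check is that cached images survive the restart
	minikube stop -p preload-repro
	minikube start -p preload-repro --driver=kvm2 --container-runtime=crio
	# verify the control plane and preloaded images came back
	kubectl --context preload-repro get pods -A
	minikube -p preload-repro image ls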

TestPause/serial/SecondStartNoReconfiguration (73.29s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-651808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-651808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.612821729s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-651808] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-651808" primary control-plane node in "pause-651808" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-651808" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1020 13:00:23.533691  182901 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:00:23.534041  182901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:00:23.534059  182901 out.go:374] Setting ErrFile to fd 2...
	I1020 13:00:23.534067  182901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:00:23.534454  182901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 13:00:23.535212  182901 out.go:368] Setting JSON to false
	I1020 13:00:23.536650  182901 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6158,"bootTime":1760959065,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 13:00:23.536720  182901 start.go:141] virtualization: kvm guest
	I1020 13:00:23.538288  182901 out.go:179] * [pause-651808] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 13:00:23.539624  182901 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:00:23.539675  182901 notify.go:220] Checking for updates...
	I1020 13:00:23.541590  182901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:00:23.542595  182901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 13:00:23.543624  182901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 13:00:23.546755  182901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 13:00:23.548107  182901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:00:23.549925  182901 config.go:182] Loaded profile config "pause-651808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:00:23.550366  182901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 13:00:23.550455  182901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 13:00:23.564510  182901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I1020 13:00:23.565024  182901 main.go:141] libmachine: () Calling .GetVersion
	I1020 13:00:23.565629  182901 main.go:141] libmachine: Using API Version  1
	I1020 13:00:23.565655  182901 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 13:00:23.566101  182901 main.go:141] libmachine: () Calling .GetMachineName
	I1020 13:00:23.566358  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:23.566714  182901 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:00:23.567033  182901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 13:00:23.567079  182901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 13:00:23.585000  182901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I1020 13:00:23.585531  182901 main.go:141] libmachine: () Calling .GetVersion
	I1020 13:00:23.586042  182901 main.go:141] libmachine: Using API Version  1
	I1020 13:00:23.586067  182901 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 13:00:23.586459  182901 main.go:141] libmachine: () Calling .GetMachineName
	I1020 13:00:23.586689  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:23.623468  182901 out.go:179] * Using the kvm2 driver based on existing profile
	I1020 13:00:23.624469  182901 start.go:305] selected driver: kvm2
	I1020 13:00:23.624482  182901 start.go:925] validating driver "kvm2" against &{Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:00:23.624612  182901 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:00:23.624916  182901 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:00:23.624986  182901 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:00:23.639527  182901 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:00:23.639561  182901 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:00:23.655024  182901 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:00:23.655852  182901 cni.go:84] Creating CNI manager for ""
	I1020 13:00:23.655908  182901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 13:00:23.655973  182901 start.go:349] cluster config:
	{Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:00:23.656099  182901 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:00:23.657837  182901 out.go:179] * Starting "pause-651808" primary control-plane node in "pause-651808" cluster
	I1020 13:00:23.658883  182901 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:00:23.658920  182901 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 13:00:23.658939  182901 cache.go:58] Caching tarball of preloaded images
	I1020 13:00:23.659028  182901 preload.go:233] Found /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1020 13:00:23.659038  182901 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:00:23.659175  182901 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/config.json ...
	I1020 13:00:23.659432  182901 start.go:360] acquireMachinesLock for pause-651808: {Name:mk7379f3db3d78bd88fb45ecf1a2b8c8492f1da9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1020 13:00:49.135540  182901 start.go:364] duration metric: took 25.476061197s to acquireMachinesLock for "pause-651808"
	I1020 13:00:49.135594  182901 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:00:49.135601  182901 fix.go:54] fixHost starting: 
	I1020 13:00:49.136095  182901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 13:00:49.136153  182901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 13:00:49.154352  182901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1020 13:00:49.154836  182901 main.go:141] libmachine: () Calling .GetVersion
	I1020 13:00:49.155329  182901 main.go:141] libmachine: Using API Version  1
	I1020 13:00:49.155356  182901 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 13:00:49.155733  182901 main.go:141] libmachine: () Calling .GetMachineName
	I1020 13:00:49.155972  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:49.156125  182901 main.go:141] libmachine: (pause-651808) Calling .GetState
	I1020 13:00:49.158072  182901 fix.go:112] recreateIfNeeded on pause-651808: state=Running err=<nil>
	W1020 13:00:49.158098  182901 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 13:00:49.160294  182901 out.go:252] * Updating the running kvm2 "pause-651808" VM ...
	I1020 13:00:49.160325  182901 machine.go:93] provisionDockerMachine start ...
	I1020 13:00:49.160341  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:49.160593  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:49.163605  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.164058  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:49.164115  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.164299  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:49.164500  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.164683  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.164839  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:49.165043  182901 main.go:141] libmachine: Using SSH client type: native
	I1020 13:00:49.165306  182901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1020 13:00:49.165320  182901 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:00:49.277312  182901 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-651808
	
	I1020 13:00:49.277350  182901 main.go:141] libmachine: (pause-651808) Calling .GetMachineName
	I1020 13:00:49.277680  182901 buildroot.go:166] provisioning hostname "pause-651808"
	I1020 13:00:49.277717  182901 main.go:141] libmachine: (pause-651808) Calling .GetMachineName
	I1020 13:00:49.277982  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:49.281546  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.282008  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:49.282029  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.282272  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:49.282496  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.282682  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.282958  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:49.283188  182901 main.go:141] libmachine: Using SSH client type: native
	I1020 13:00:49.283436  182901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1020 13:00:49.283453  182901 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-651808 && echo "pause-651808" | sudo tee /etc/hostname
	I1020 13:00:49.409000  182901 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-651808
	
	I1020 13:00:49.409035  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:49.412697  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.413200  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:49.413234  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.413515  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:49.413763  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.413998  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:49.414221  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:49.414455  182901 main.go:141] libmachine: Using SSH client type: native
	I1020 13:00:49.414747  182901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1020 13:00:49.414772  182901 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-651808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-651808/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-651808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:00:49.524052  182901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:00:49.524099  182901 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21773-139101/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-139101/.minikube}
	I1020 13:00:49.524127  182901 buildroot.go:174] setting up certificates
	I1020 13:00:49.524138  182901 provision.go:84] configureAuth start
	I1020 13:00:49.524151  182901 main.go:141] libmachine: (pause-651808) Calling .GetMachineName
	I1020 13:00:49.524517  182901 main.go:141] libmachine: (pause-651808) Calling .GetIP
	I1020 13:00:49.528202  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.528702  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:49.528726  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.528962  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:49.531568  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.532057  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:49.532092  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:49.532244  182901 provision.go:143] copyHostCerts
	I1020 13:00:49.532310  182901 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem, removing ...
	I1020 13:00:49.532320  182901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem
	I1020 13:00:49.532383  182901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/ca.pem (1082 bytes)
	I1020 13:00:49.532538  182901 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem, removing ...
	I1020 13:00:49.532551  182901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem
	I1020 13:00:49.532575  182901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/cert.pem (1123 bytes)
	I1020 13:00:49.532665  182901 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem, removing ...
	I1020 13:00:49.532676  182901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem
	I1020 13:00:49.532701  182901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-139101/.minikube/key.pem (1675 bytes)
	I1020 13:00:49.532763  182901 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem org=jenkins.pause-651808 san=[127.0.0.1 192.168.39.100 localhost minikube pause-651808]
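minikube issues this server certificate in its own Go code; as a rough openssl equivalent (a hypothetical sketch reusing the CA files and SAN list from the log line above, requiring bash for the process substitution):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.pause-651808" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.100,DNS:localhost,DNS:minikube,DNS:pause-651808')
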
	I1020 13:00:50.181308  182901 provision.go:177] copyRemoteCerts
	I1020 13:00:50.181368  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:00:50.181395  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:50.184580  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:50.185035  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:50.185064  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:50.185263  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:50.185504  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:50.185725  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:50.185919  182901 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/pause-651808/id_rsa Username:docker}
	I1020 13:00:50.273439  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1020 13:00:50.307112  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 13:00:50.340475  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 13:00:50.381589  182901 provision.go:87] duration metric: took 857.43388ms to configureAuth
	I1020 13:00:50.381629  182901 buildroot.go:189] setting minikube options for container-runtime
	I1020 13:00:50.381864  182901 config.go:182] Loaded profile config "pause-651808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:00:50.381938  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:50.385310  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:50.385763  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:50.385795  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:50.386011  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:50.386187  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:50.386336  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:50.386500  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:50.386704  182901 main.go:141] libmachine: Using SSH client type: native
	I1020 13:00:50.386956  182901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1020 13:00:50.386972  182901 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:00:57.866622  182901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:00:57.866652  182901 machine.go:96] duration metric: took 8.706317711s to provisionDockerMachine
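The /etc/sysconfig/crio.minikube file written above is an environment drop-in; the guest's crio.service is assumed (the unit itself is not captured in this log) to source it and pass $CRIO_MINIKUBE_OPTIONS to the daemon, roughly:

	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

which is why the write is immediately followed by systemctl restart crio.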
	I1020 13:00:57.866669  182901 start.go:293] postStartSetup for "pause-651808" (driver="kvm2")
	I1020 13:00:57.866682  182901 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:00:57.866728  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:57.867169  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:00:57.867219  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:57.870651  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:57.871179  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:57.871214  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:57.871456  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:57.871693  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:57.871856  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:57.872009  182901 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/pause-651808/id_rsa Username:docker}
	I1020 13:00:57.955994  182901 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:00:57.960781  182901 info.go:137] Remote host: Buildroot 2025.02
	I1020 13:00:57.960805  182901 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/addons for local assets ...
	I1020 13:00:57.960875  182901 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-139101/.minikube/files for local assets ...
	I1020 13:00:57.960965  182901 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem -> 1431312.pem in /etc/ssl/certs
	I1020 13:00:57.961093  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:00:57.972962  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /etc/ssl/certs/1431312.pem (1708 bytes)
	I1020 13:00:58.001249  182901 start.go:296] duration metric: took 134.564871ms for postStartSetup
	I1020 13:00:58.001289  182901 fix.go:56] duration metric: took 8.865687986s for fixHost
	I1020 13:00:58.001313  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:58.004240  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.004681  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:58.004711  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.004890  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:58.005062  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:58.005250  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:58.005371  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:58.005527  182901 main.go:141] libmachine: Using SSH client type: native
	I1020 13:00:58.005738  182901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1020 13:00:58.005753  182901 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1020 13:00:58.113006  182901 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760965258.106815671
	
	I1020 13:00:58.113031  182901 fix.go:216] guest clock: 1760965258.106815671
	I1020 13:00:58.113040  182901 fix.go:229] Guest: 2025-10-20 13:00:58.106815671 +0000 UTC Remote: 2025-10-20 13:00:58.001294229 +0000 UTC m=+34.511500593 (delta=105.521442ms)
	I1020 13:00:58.113081  182901 fix.go:200] guest clock delta is within tolerance: 105.521442ms
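The clock check above runs date +%s.%N in the guest and compares it to the host-side timestamp of the same command, resynchronizing only if the drift exceeds the tolerance. A rough manual equivalent (illustrative, not the test's code):

	guest=$(ssh root@192.168.39.100 date +%s.%N)
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'
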
	I1020 13:00:58.113089  182901 start.go:83] releasing machines lock for "pause-651808", held for 8.977517509s
	I1020 13:00:58.113116  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:58.113370  182901 main.go:141] libmachine: (pause-651808) Calling .GetIP
	I1020 13:00:58.116804  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.117228  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:58.117259  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.117425  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:58.117946  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:58.118142  182901 main.go:141] libmachine: (pause-651808) Calling .DriverName
	I1020 13:00:58.118254  182901 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:00:58.118311  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:58.118373  182901 ssh_runner.go:195] Run: cat /version.json
	I1020 13:00:58.118413  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHHostname
	I1020 13:00:58.121609  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.121833  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.122005  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:58.122033  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.122178  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:58.122336  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:00:58.122347  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:58.122357  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:00:58.122541  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHPort
	I1020 13:00:58.122557  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:58.122735  182901 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/pause-651808/id_rsa Username:docker}
	I1020 13:00:58.122747  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHKeyPath
	I1020 13:00:58.122993  182901 main.go:141] libmachine: (pause-651808) Calling .GetSSHUsername
	I1020 13:00:58.123145  182901 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/pause-651808/id_rsa Username:docker}
	I1020 13:00:58.210085  182901 ssh_runner.go:195] Run: systemctl --version
	I1020 13:00:58.242684  182901 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:00:58.399357  182901 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:00:58.407014  182901 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:00:58.407073  182901 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:00:58.420846  182901 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:00:58.420874  182901 start.go:495] detecting cgroup driver to use...
	I1020 13:00:58.420947  182901 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:00:58.446396  182901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:00:58.465269  182901 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:00:58.465332  182901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:00:58.486082  182901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:00:58.502667  182901 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:00:58.828925  182901 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:00:59.137086  182901 docker.go:234] disabling docker service ...
	I1020 13:00:59.137186  182901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:00:59.183577  182901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:00:59.227554  182901 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:00:59.629395  182901 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:01:00.037919  182901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:01:00.062605  182901 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:01:00.097100  182901 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:01:00.097223  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.119323  182901 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:01:00.119439  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.147511  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.196895  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.230371  182901 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:01:00.252723  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.273491  182901 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:01:00.295722  182901 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
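Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch; the resulting file is not printed in the log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
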
	I1020 13:01:00.316982  182901 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:01:00.338588  182901 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:01:00.355501  182901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:01:00.648391  182901 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:01:10.768973  182901 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.120522514s)
	I1020 13:01:10.769008  182901 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:01:10.769063  182901 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:01:10.776313  182901 start.go:563] Will wait 60s for crictl version
	I1020 13:01:10.776422  182901 ssh_runner.go:195] Run: which crictl
	I1020 13:01:10.781678  182901 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1020 13:01:10.828392  182901 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1020 13:01:10.828513  182901 ssh_runner.go:195] Run: crio --version
	I1020 13:01:10.861158  182901 ssh_runner.go:195] Run: crio --version
	I1020 13:01:10.904142  182901 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1020 13:01:10.905128  182901 main.go:141] libmachine: (pause-651808) Calling .GetIP
	I1020 13:01:10.909245  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.909774  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:01:10.909805  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.910045  182901 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1020 13:01:10.914963  182901 kubeadm.go:883] updating cluster {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:01:10.915096  182901 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:01:10.915153  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:10.972608  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:10.972637  182901 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:01:10.972692  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:11.019953  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:11.019976  182901 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:01:11.019986  182901 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1020 13:01:11.020097  182901 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-651808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
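Note the empty ExecStart= line in the unit above: in systemd, clearing ExecStart before assigning it again is what lets this 10-kubeadm.conf drop-in replace, rather than append to, the base unit's command line. The merged unit can be inspected on the guest with:

	systemctl cat kubelet
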
	I1020 13:01:11.020193  182901 ssh_runner.go:195] Run: crio config
	I1020 13:01:11.070657  182901 cni.go:84] Creating CNI manager for ""
	I1020 13:01:11.070683  182901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 13:01:11.070709  182901 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:01:11.070738  182901 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-651808 NodeName:pause-651808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:01:11.070918  182901 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-651808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:01:11.070996  182901 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:01:11.082946  182901 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:01:11.083006  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:01:11.094450  182901 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1020 13:01:11.116763  182901 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:01:11.136130  182901 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
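This scp places the kubeadm config rendered earlier at /var/tmp/minikube/kubeadm.yaml.new. On recent kubeadm releases it could be sanity-checked in place with (illustrative; the test does not run this):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
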
	I1020 13:01:11.155609  182901 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1020 13:01:11.159837  182901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:01:11.327134  182901 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:01:11.344329  182901 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808 for IP: 192.168.39.100
	I1020 13:01:11.344359  182901 certs.go:195] generating shared ca certs ...
	I1020 13:01:11.344396  182901 certs.go:227] acquiring lock for ca certs: {Name:mk4d0d22cc1ac40184675be8ad2f5fa8f1c0ffc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:01:11.344590  182901 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key
	I1020 13:01:11.344647  182901 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key
	I1020 13:01:11.344659  182901 certs.go:257] generating profile certs ...
	I1020 13:01:11.344772  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/client.key
	I1020 13:01:11.344842  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key.744b6a91
	I1020 13:01:11.344913  182901 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key
	I1020 13:01:11.345081  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem (1338 bytes)
	W1020 13:01:11.345131  182901 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131_empty.pem, impossibly tiny 0 bytes
	I1020 13:01:11.345148  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:01:11.345193  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem (1082 bytes)
	I1020 13:01:11.345230  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:01:11.345286  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem (1675 bytes)
	I1020 13:01:11.345351  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem (1708 bytes)
	I1020 13:01:11.346433  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:01:11.377395  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1020 13:01:11.405728  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:01:11.439143  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:01:11.472697  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 13:01:11.501296  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 13:01:11.533783  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:01:11.567494  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:01:11.605276  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /usr/share/ca-certificates/1431312.pem (1708 bytes)
	I1020 13:01:11.644228  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:01:11.676093  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem --> /usr/share/ca-certificates/143131.pem (1338 bytes)
	I1020 13:01:11.706583  182901 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:01:11.737851  182901 ssh_runner.go:195] Run: openssl version
	I1020 13:01:11.753072  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1431312.pem && ln -fs /usr/share/ca-certificates/1431312.pem /etc/ssl/certs/1431312.pem"
	I1020 13:01:11.826314  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836671  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:06 /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836752  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.855720  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1431312.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:01:11.880887  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:01:11.909865  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925033  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925110  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.955267  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:01:11.983925  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143131.pem && ln -fs /usr/share/ca-certificates/143131.pem /etc/ssl/certs/143131.pem"
	I1020 13:01:12.014906  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032542  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:06 /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032633  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.044972  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143131.pem /etc/ssl/certs/51391683.0"
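The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject hashes, which is how the OpenSSL CA lookup in /etc/ssl/certs locates a certificate. The same link could be created by hand (illustrative):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
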
	I1020 13:01:12.062846  182901 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:01:12.077800  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:01:12.087925  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:01:12.106951  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:01:12.123664  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:01:12.137647  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:01:12.150444  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
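These -checkend 86400 probes ask openssl whether each control-plane certificate will still be valid 24 hours from now (exit 0 if yes, non-zero otherwise), e.g. (illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for 24h+" || echo "expiring within 24h"
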
	I1020 13:01:12.172138  182901 kubeadm.go:400] StartCluster: {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:01:12.172274  182901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:01:12.172330  182901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:01:12.271884  182901 cri.go:89] found id: "f937159c10983cdf27318cffae7d70f957ae835c092ca5849879c2619ee075de"
	I1020 13:01:12.271915  182901 cri.go:89] found id: "cfaadcf1551adbd8ed1e3dfd9da76fe0bf403087f26aa5cf6fae83fea3eafb96"
	I1020 13:01:12.271921  182901 cri.go:89] found id: "e76b8e65ba0906eb8da1569ec66689093a4a02e97ce048d85aa98c125eb4353c"
	I1020 13:01:12.271925  182901 cri.go:89] found id: "a68b93f2f906643bafbdb6fe76f0d8c7fd1823da764312dd89c22dd8a0d42046"
	I1020 13:01:12.271927  182901 cri.go:89] found id: "7e2570bfa23e4f5344178f25877a37320166b734c37a9a83350a47ffe1d65eb3"
	I1020 13:01:12.271930  182901 cri.go:89] found id: "7b266597b87884d5f7a2198db73243281c52eab027e66a27b7f9ae6f68d392cf"
	I1020 13:01:12.271934  182901 cri.go:89] found id: "e7d8bd7d83a224579b802ce9e37a6d39724a87bfac159c11737374d5fa62fa42"
	I1020 13:01:12.271938  182901 cri.go:89] found id: "d193e3dc4d67d1276b3990e16a09896ace1e959881335e7ff9c8bf0d0cb43d39"
	I1020 13:01:12.271942  182901 cri.go:89] found id: "94fbfa6f3eb3ec5373543cd11cf8056bf549891df1ca276ab84c697f66e9971c"
	I1020 13:01:12.271952  182901 cri.go:89] found id: "4bcb74cbb86c7341504a72c9e04f508de706ced1c2a9c7d051263f6216b8f3ca"
	I1020 13:01:12.271957  182901 cri.go:89] found id: ""
	I1020 13:01:12.272016  182901 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-651808 -n pause-651808
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-651808 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-651808 logs -n 25: (1.692131699s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                         │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │                     │
	│ start   │ -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:59 UTC │
	│ ssh     │ -p NoKubernetes-518209 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │                     │
	│ stop    │ -p NoKubernetes-518209                                                                                                                                                                                                                              │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:58 UTC │
	│ start   │ -p NoKubernetes-518209 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p running-upgrade-066492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-017504 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ stopped-upgrade-017504    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-017504                                                                                                                                                                                                                           │ stopped-upgrade-017504    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p pause-651808 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-651808              │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ delete  │ -p kubernetes-upgrade-486976                                                                                                                                                                                                                        │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p cert-expiration-324693 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-324693    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ ssh     │ -p NoKubernetes-518209 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p NoKubernetes-518209                                                                                                                                                                                                                              │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p force-systemd-flag-850858 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                               │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-066492 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p running-upgrade-066492                                                                                                                                                                                                                           │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p cert-options-341854 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:01 UTC │
	│ start   │ -p pause-651808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-651808              │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:01 UTC │
	│ ssh     │ force-systemd-flag-850858 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:00 UTC │
	│ delete  │ -p force-systemd-flag-850858                                                                                                                                                                                                                        │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:00 UTC │
	│ start   │ -p auto-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                   │ auto-126965               │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │                     │
	│ ssh     │ cert-options-341854 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ ssh     │ -p cert-options-341854 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ delete  │ -p cert-options-341854                                                                                                                                                                                                                              │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ start   │ -p kindnet-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kindnet-126965            │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:01:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:01:11.039442  183769 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:01:11.039576  183769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:11.039588  183769 out.go:374] Setting ErrFile to fd 2...
	I1020 13:01:11.039595  183769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:11.039822  183769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 13:01:11.040314  183769 out.go:368] Setting JSON to false
	I1020 13:01:11.041366  183769 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6206,"bootTime":1760959065,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 13:01:11.041484  183769 start.go:141] virtualization: kvm guest
	I1020 13:01:11.043475  183769 out.go:179] * [kindnet-126965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 13:01:11.044627  183769 notify.go:220] Checking for updates...
	I1020 13:01:11.044638  183769 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:01:11.045700  183769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:01:11.046787  183769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 13:01:11.047795  183769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 13:01:11.048710  183769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 13:01:11.049713  183769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:01:11.051037  183769 config.go:182] Loaded profile config "auto-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051168  183769 config.go:182] Loaded profile config "cert-expiration-324693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051291  183769 config.go:182] Loaded profile config "pause-651808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051360  183769 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:01:11.089387  183769 out.go:179] * Using the kvm2 driver based on user configuration
	I1020 13:01:11.090673  183769 start.go:305] selected driver: kvm2
	I1020 13:01:11.090696  183769 start.go:925] validating driver "kvm2" against <nil>
	I1020 13:01:11.090713  183769 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:01:11.091380  183769 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:01:11.091498  183769 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:01:11.109427  183769 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:01:11.109475  183769 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:01:11.124329  183769 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:01:11.124393  183769 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:01:11.124682  183769 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:01:11.124709  183769 cni.go:84] Creating CNI manager for "kindnet"
	I1020 13:01:11.124716  183769 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:01:11.124760  183769 start.go:349] cluster config:
	{Name:kindnet-126965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-126965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:01:11.124860  183769 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:01:11.126616  183769 out.go:179] * Starting "kindnet-126965" primary control-plane node in "kindnet-126965" cluster
	I1020 13:01:10.905128  182901 main.go:141] libmachine: (pause-651808) Calling .GetIP
	I1020 13:01:10.909245  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.909774  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:01:10.909805  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.910045  182901 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
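	The grep above is minikube's idempotence check: it only needs to act if /etc/hosts does not already map host.minikube.internal to the gateway IP (192.168.39.1 on this network). A minimal sketch of the same check-and-append pattern, assuming shell access to the guest (the tee remediation is illustrative, not taken from this log):
	    $ grep "host.minikube.internal$" /etc/hosts || \
	        printf '192.168.39.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts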
	I1020 13:01:10.914963  182901 kubeadm.go:883] updating cluster {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:01:10.915096  182901 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:01:10.915153  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:10.972608  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:10.972637  182901 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:01:10.972692  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:11.019953  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:11.019976  182901 cache_images.go:85] Images are preloaded, skipping loading
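	The preload verdict above comes from comparing crictl's JSON image listing against the expected preload set; the same listing can be reproduced by hand on the node (the jq projection is illustrative):
	    $ sudo crictl images --output json | jq -r '.images[].repoTags[]'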
	I1020 13:01:11.019986  182901 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1020 13:01:11.020097  182901 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-651808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
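	The [Unit]/[Service] fragment above becomes a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= resets the packaged command line before minikube sets its own. To inspect what systemd actually merged, a hedged check using this run's profile:
	    $ out/minikube-linux-amd64 -p pause-651808 ssh "systemctl cat kubelet"
	    $ out/minikube-linux-amd64 -p pause-651808 ssh "systemctl show kubelet -p ExecStart"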
	I1020 13:01:11.020193  182901 ssh_runner.go:195] Run: crio config
	I1020 13:01:11.070657  182901 cni.go:84] Creating CNI manager for ""
	I1020 13:01:11.070683  182901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 13:01:11.070709  182901 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:01:11.070738  182901 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-651808 NodeName:pause-651808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:01:11.070918  182901 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-651808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
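	This four-document manifest (InitConfiguration, ClusterConfiguration, plus the kubelet and kube-proxy component configs) is what lands in /var/tmp/minikube/kubeadm.yaml.new below. Recent kubeadm releases ship a lint subcommand that can be pointed at such a file before it is consumed; a hedged sketch, assuming kubeadm is on the guest's PATH:
	    $ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new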
	
	I1020 13:01:11.070996  182901 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:01:11.082946  182901 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:01:11.083006  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:01:11.094450  182901 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1020 13:01:11.116763  182901 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:01:11.136130  182901 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
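	Each "scp memory" line streams generated content from minikube's memory straight onto the guest over SSH and logs the byte count it wrote; the counts (312, 352 and 2215 bytes here) can be cross-checked on the node:
	    $ out/minikube-linux-amd64 -p pause-651808 ssh "wc -c /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service /var/tmp/minikube/kubeadm.yaml.new"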
	I1020 13:01:11.155609  182901 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1020 13:01:11.159837  182901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:01:11.327134  182901 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:01:11.344329  182901 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808 for IP: 192.168.39.100
	I1020 13:01:11.344359  182901 certs.go:195] generating shared ca certs ...
	I1020 13:01:11.344396  182901 certs.go:227] acquiring lock for ca certs: {Name:mk4d0d22cc1ac40184675be8ad2f5fa8f1c0ffc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:01:11.344590  182901 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key
	I1020 13:01:11.344647  182901 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key
	I1020 13:01:11.344659  182901 certs.go:257] generating profile certs ...
	I1020 13:01:11.344772  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/client.key
	I1020 13:01:11.344842  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key.744b6a91
	I1020 13:01:11.344913  182901 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key
	I1020 13:01:11.345081  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem (1338 bytes)
	W1020 13:01:11.345131  182901 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131_empty.pem, impossibly tiny 0 bytes
	I1020 13:01:11.345148  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:01:11.345193  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem (1082 bytes)
	I1020 13:01:11.345230  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:01:11.345286  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem (1675 bytes)
	I1020 13:01:11.345351  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem (1708 bytes)
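	The warning above shows minikube skipping a zero-byte certificate file instead of failing on it; stale empties like 143131_empty.pem can be located with a standard find (the invocation is an assumption, not part of this run):
	    $ find /home/jenkins/minikube-integration/21773-139101/.minikube/certs -name '*.pem' -empty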
	I1020 13:01:11.346433  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:01:11.377395  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1020 13:01:11.405728  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:01:11.439143  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:01:11.472697  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 13:01:11.501296  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 13:01:11.533783  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:01:11.567494  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:01:11.605276  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /usr/share/ca-certificates/1431312.pem (1708 bytes)
	I1020 13:01:11.644228  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:01:11.676093  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem --> /usr/share/ca-certificates/143131.pem (1338 bytes)
	I1020 13:01:11.706583  182901 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:01:11.737851  182901 ssh_runner.go:195] Run: openssl version
	I1020 13:01:11.753072  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1431312.pem && ln -fs /usr/share/ca-certificates/1431312.pem /etc/ssl/certs/1431312.pem"
	I1020 13:01:11.826314  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836671  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:06 /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836752  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.855720  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1431312.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:01:11.880887  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:01:11.909865  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925033  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925110  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.955267  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:01:11.983925  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143131.pem && ln -fs /usr/share/ca-certificates/143131.pem /etc/ssl/certs/143131.pem"
	I1020 13:01:12.014906  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032542  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:06 /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032633  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.044972  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143131.pem /etc/ssl/certs/51391683.0"
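	The three test-and-link pairs above build the conventional OpenSSL CA directory layout: every certificate in /etc/ssl/certs is also reachable through a symlink named <subject-hash>.0, which is how TLS clients locate a trust anchor. The hash in each link name comes straight from openssl; for minikubeCA.pem the value matches the b5213941.0 link created in this log:
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941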
	I1020 13:01:12.062846  182901 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:01:12.077800  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:01:12.087925  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:01:12.106951  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:01:12.123664  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:01:12.137647  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:01:12.150444  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
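	Each -checkend 86400 probe asks openssl whether the certificate stays valid for the next 86400 seconds (24 hours); the answer is carried in the exit status, which is why the checks run silently here. An illustrative wrapper:
	    $ if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	          echo "etcd server cert good for at least 24h"
	      else
	          echo "etcd server cert expires within 24h; regeneration needed"
	      fi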
	I1020 13:01:12.172138  182901 kubeadm.go:400] StartCluster: {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:01:12.172274  182901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:01:12.172330  182901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:01:12.271884  182901 cri.go:89] found id: "f937159c10983cdf27318cffae7d70f957ae835c092ca5849879c2619ee075de"
	I1020 13:01:12.271915  182901 cri.go:89] found id: "cfaadcf1551adbd8ed1e3dfd9da76fe0bf403087f26aa5cf6fae83fea3eafb96"
	I1020 13:01:12.271921  182901 cri.go:89] found id: "e76b8e65ba0906eb8da1569ec66689093a4a02e97ce048d85aa98c125eb4353c"
	I1020 13:01:12.271925  182901 cri.go:89] found id: "a68b93f2f906643bafbdb6fe76f0d8c7fd1823da764312dd89c22dd8a0d42046"
	I1020 13:01:12.271927  182901 cri.go:89] found id: "7e2570bfa23e4f5344178f25877a37320166b734c37a9a83350a47ffe1d65eb3"
	I1020 13:01:12.271930  182901 cri.go:89] found id: "7b266597b87884d5f7a2198db73243281c52eab027e66a27b7f9ae6f68d392cf"
	I1020 13:01:12.271934  182901 cri.go:89] found id: "e7d8bd7d83a224579b802ce9e37a6d39724a87bfac159c11737374d5fa62fa42"
	I1020 13:01:12.271938  182901 cri.go:89] found id: "d193e3dc4d67d1276b3990e16a09896ace1e959881335e7ff9c8bf0d0cb43d39"
	I1020 13:01:12.271942  182901 cri.go:89] found id: "94fbfa6f3eb3ec5373543cd11cf8056bf549891df1ca276ab84c697f66e9971c"
	I1020 13:01:12.271952  182901 cri.go:89] found id: "4bcb74cbb86c7341504a72c9e04f508de706ced1c2a9c7d051263f6216b8f3ca"
	I1020 13:01:12.271957  182901 cri.go:89] found id: ""
	I1020 13:01:12.272016  182901 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
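The container IDs at the end of the dump above come from the label-filtered listing started at 13:01:12.172; mapping those IDs back to pod and container names uses the same filter without --quiet (the jq projection is illustrative):
	    $ sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json | \
	        jq -r '.containers[] | "\(.id[0:13])  \(.metadata.name)  \(.state)"'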
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-651808 -n pause-651808
helpers_test.go:269: (dbg) Run:  kubectl --context pause-651808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-651808 -n pause-651808
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-651808 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-651808 logs -n 25: (1.559418904s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                         │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │                     │
	│ start   │ -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:59 UTC │
	│ ssh     │ -p NoKubernetes-518209 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │                     │
	│ stop    │ -p NoKubernetes-518209                                                                                                                                                                                                                              │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:58 UTC │
	│ start   │ -p NoKubernetes-518209 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:58 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p running-upgrade-066492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-017504 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ stopped-upgrade-017504    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-017504                                                                                                                                                                                                                           │ stopped-upgrade-017504    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p pause-651808 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-651808              │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ delete  │ -p kubernetes-upgrade-486976                                                                                                                                                                                                                        │ kubernetes-upgrade-486976 │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p cert-expiration-324693 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-324693    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ ssh     │ -p NoKubernetes-518209 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p NoKubernetes-518209                                                                                                                                                                                                                              │ NoKubernetes-518209       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p force-systemd-flag-850858 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                               │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-066492 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │                     │
	│ delete  │ -p running-upgrade-066492                                                                                                                                                                                                                           │ running-upgrade-066492    │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 12:59 UTC │
	│ start   │ -p cert-options-341854 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 12:59 UTC │ 20 Oct 25 13:01 UTC │
	│ start   │ -p pause-651808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-651808              │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:01 UTC │
	│ ssh     │ force-systemd-flag-850858 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:00 UTC │
	│ delete  │ -p force-systemd-flag-850858                                                                                                                                                                                                                        │ force-systemd-flag-850858 │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │ 20 Oct 25 13:00 UTC │
	│ start   │ -p auto-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                   │ auto-126965               │ jenkins │ v1.37.0 │ 20 Oct 25 13:00 UTC │                     │
	│ ssh     │ cert-options-341854 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ ssh     │ -p cert-options-341854 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ delete  │ -p cert-options-341854                                                                                                                                                                                                                              │ cert-options-341854       │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │ 20 Oct 25 13:01 UTC │
	│ start   │ -p kindnet-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kindnet-126965            │ jenkins │ v1.37.0 │ 20 Oct 25 13:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:01:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:01:11.039442  183769 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:01:11.039576  183769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:11.039588  183769 out.go:374] Setting ErrFile to fd 2...
	I1020 13:01:11.039595  183769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:11.039822  183769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 13:01:11.040314  183769 out.go:368] Setting JSON to false
	I1020 13:01:11.041366  183769 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6206,"bootTime":1760959065,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 13:01:11.041484  183769 start.go:141] virtualization: kvm guest
	I1020 13:01:11.043475  183769 out.go:179] * [kindnet-126965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 13:01:11.044627  183769 notify.go:220] Checking for updates...
	I1020 13:01:11.044638  183769 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:01:11.045700  183769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:01:11.046787  183769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 13:01:11.047795  183769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 13:01:11.048710  183769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 13:01:11.049713  183769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:01:11.051037  183769 config.go:182] Loaded profile config "auto-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051168  183769 config.go:182] Loaded profile config "cert-expiration-324693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051291  183769 config.go:182] Loaded profile config "pause-651808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:11.051360  183769 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:01:11.089387  183769 out.go:179] * Using the kvm2 driver based on user configuration
	I1020 13:01:11.090673  183769 start.go:305] selected driver: kvm2
	I1020 13:01:11.090696  183769 start.go:925] validating driver "kvm2" against <nil>
	I1020 13:01:11.090713  183769 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:01:11.091380  183769 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:01:11.091498  183769 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:01:11.109427  183769 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:01:11.109475  183769 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 13:01:11.124329  183769 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 13:01:11.124393  183769 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:01:11.124682  183769 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:01:11.124709  183769 cni.go:84] Creating CNI manager for "kindnet"
	I1020 13:01:11.124716  183769 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:01:11.124760  183769 start.go:349] cluster config:
	{Name:kindnet-126965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-126965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:01:11.124860  183769 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:01:11.126616  183769 out.go:179] * Starting "kindnet-126965" primary control-plane node in "kindnet-126965" cluster
	I1020 13:01:10.905128  182901 main.go:141] libmachine: (pause-651808) Calling .GetIP
	I1020 13:01:10.909245  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.909774  182901 main.go:141] libmachine: (pause-651808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:94:b3", ip: ""} in network mk-pause-651808: {Iface:virbr1 ExpiryTime:2025-10-20 13:59:45 +0000 UTC Type:0 Mac:52:54:00:15:94:b3 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:pause-651808 Clientid:01:52:54:00:15:94:b3}
	I1020 13:01:10.909805  182901 main.go:141] libmachine: (pause-651808) DBG | domain pause-651808 has defined IP address 192.168.39.100 and MAC address 52:54:00:15:94:b3 in network mk-pause-651808
	I1020 13:01:10.910045  182901 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1020 13:01:10.914963  182901 kubeadm.go:883] updating cluster {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:01:10.915096  182901 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:01:10.915153  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:10.972608  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:10.972637  182901 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:01:10.972692  182901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:01:11.019953  182901 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:01:11.019976  182901 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:01:11.019986  182901 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.34.1 crio true true} ...
	I1020 13:01:11.020097  182901 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-651808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:01:11.020193  182901 ssh_runner.go:195] Run: crio config
	I1020 13:01:11.070657  182901 cni.go:84] Creating CNI manager for ""
	I1020 13:01:11.070683  182901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 13:01:11.070709  182901 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:01:11.070738  182901 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-651808 NodeName:pause-651808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:01:11.070918  182901 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-651808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:01:11.070996  182901 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:01:11.082946  182901 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:01:11.083006  182901 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:01:11.094450  182901 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1020 13:01:11.116763  182901 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:01:11.136130  182901 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1020 13:01:11.155609  182901 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1020 13:01:11.159837  182901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:01:11.327134  182901 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:01:11.344329  182901 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808 for IP: 192.168.39.100
	I1020 13:01:11.344359  182901 certs.go:195] generating shared ca certs ...
	I1020 13:01:11.344396  182901 certs.go:227] acquiring lock for ca certs: {Name:mk4d0d22cc1ac40184675be8ad2f5fa8f1c0ffc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:01:11.344590  182901 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key
	I1020 13:01:11.344647  182901 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key
	I1020 13:01:11.344659  182901 certs.go:257] generating profile certs ...
	I1020 13:01:11.344772  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/client.key
	I1020 13:01:11.344842  182901 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key.744b6a91
	I1020 13:01:11.344913  182901 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key
	I1020 13:01:11.345081  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem (1338 bytes)
	W1020 13:01:11.345131  182901 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131_empty.pem, impossibly tiny 0 bytes
	I1020 13:01:11.345148  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:01:11.345193  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/ca.pem (1082 bytes)
	I1020 13:01:11.345230  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:01:11.345286  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/certs/key.pem (1675 bytes)
	I1020 13:01:11.345351  182901 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem (1708 bytes)
	I1020 13:01:11.346433  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:01:11.377395  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1020 13:01:11.405728  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:01:11.439143  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:01:11.472697  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 13:01:11.501296  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1020 13:01:11.533783  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:01:11.567494  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/pause-651808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:01:11.605276  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/ssl/certs/1431312.pem --> /usr/share/ca-certificates/1431312.pem (1708 bytes)
	I1020 13:01:11.644228  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:01:11.676093  182901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-139101/.minikube/certs/143131.pem --> /usr/share/ca-certificates/143131.pem (1338 bytes)
	I1020 13:01:11.706583  182901 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:01:11.737851  182901 ssh_runner.go:195] Run: openssl version
	I1020 13:01:11.753072  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1431312.pem && ln -fs /usr/share/ca-certificates/1431312.pem /etc/ssl/certs/1431312.pem"
	I1020 13:01:11.826314  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836671  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:06 /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.836752  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1431312.pem
	I1020 13:01:11.855720  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1431312.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:01:11.880887  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:01:11.909865  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925033  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 11:57 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.925110  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:01:11.955267  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:01:11.983925  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143131.pem && ln -fs /usr/share/ca-certificates/143131.pem /etc/ssl/certs/143131.pem"
	I1020 13:01:12.014906  182901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032542  182901 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:06 /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.032633  182901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143131.pem
	I1020 13:01:12.044972  182901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143131.pem /etc/ssl/certs/51391683.0"
	I1020 13:01:12.062846  182901 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:01:12.077800  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:01:12.087925  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:01:12.106951  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:01:12.123664  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:01:12.137647  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:01:12.150444  182901 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1020 13:01:12.172138  182901 kubeadm.go:400] StartCluster: {Name:pause-651808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-651808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:01:12.172274  182901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:01:12.172330  182901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:01:12.271884  182901 cri.go:89] found id: "f937159c10983cdf27318cffae7d70f957ae835c092ca5849879c2619ee075de"
	I1020 13:01:12.271915  182901 cri.go:89] found id: "cfaadcf1551adbd8ed1e3dfd9da76fe0bf403087f26aa5cf6fae83fea3eafb96"
	I1020 13:01:12.271921  182901 cri.go:89] found id: "e76b8e65ba0906eb8da1569ec66689093a4a02e97ce048d85aa98c125eb4353c"
	I1020 13:01:12.271925  182901 cri.go:89] found id: "a68b93f2f906643bafbdb6fe76f0d8c7fd1823da764312dd89c22dd8a0d42046"
	I1020 13:01:12.271927  182901 cri.go:89] found id: "7e2570bfa23e4f5344178f25877a37320166b734c37a9a83350a47ffe1d65eb3"
	I1020 13:01:12.271930  182901 cri.go:89] found id: "7b266597b87884d5f7a2198db73243281c52eab027e66a27b7f9ae6f68d392cf"
	I1020 13:01:12.271934  182901 cri.go:89] found id: "e7d8bd7d83a224579b802ce9e37a6d39724a87bfac159c11737374d5fa62fa42"
	I1020 13:01:12.271938  182901 cri.go:89] found id: "d193e3dc4d67d1276b3990e16a09896ace1e959881335e7ff9c8bf0d0cb43d39"
	I1020 13:01:12.271942  182901 cri.go:89] found id: "94fbfa6f3eb3ec5373543cd11cf8056bf549891df1ca276ab84c697f66e9971c"
	I1020 13:01:12.271952  182901 cri.go:89] found id: "4bcb74cbb86c7341504a72c9e04f508de706ced1c2a9c7d051263f6216b8f3ca"
	I1020 13:01:12.271957  182901 cri.go:89] found id: ""
	I1020 13:01:12.272016  182901 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-651808 -n pause-651808
helpers_test.go:269: (dbg) Run:  kubectl --context pause-651808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (73.29s)
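
For reference on the certificate handling visible in the post-mortem log above (the certs.go steps followed by the openssl and ln invocations): each CA is placed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its OpenSSL subject hash, and each cluster certificate is probed with -checkend 86400 to confirm it stays valid for at least another day. A minimal shell sketch of the same sequence, assuming a hypothetical PEM file ./ca.pem:

	# Subject hash OpenSSL uses to index /etc/ssl/certs (e.g. "b5213941" for minikubeCA above).
	hash=$(openssl x509 -hash -noout -in ./ca.pem)
	sudo cp ./ca.pem /usr/share/ca-certificates/ca.pem
	# The <hash>.0 symlink is what TLS verifiers actually look up.
	sudo ln -fs /usr/share/ca-certificates/ca.pem "/etc/ssl/certs/${hash}.0"
	# Exits non-zero if the certificate expires within 86400 seconds (one day).
	openssl x509 -noout -in ./ca.pem -checkend 86400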


Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.85
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 13.5
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 112.93
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 205.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.54
35 TestAddons/parallel/Registry 16.81
36 TestAddons/parallel/RegistryCreds 0.77
38 TestAddons/parallel/InspektorGadget 6.38
39 TestAddons/parallel/MetricsServer 5.9
41 TestAddons/parallel/CSI 55.42
42 TestAddons/parallel/Headlamp 20.91
43 TestAddons/parallel/CloudSpanner 5.6
44 TestAddons/parallel/LocalPath 56.85
45 TestAddons/parallel/NvidiaDevicePlugin 6.58
46 TestAddons/parallel/Yakd 11.82
48 TestAddons/StoppedEnableDisable 88.66
49 TestCertOptions 83.32
50 TestCertExpiration 300.99
52 TestForceSystemdFlag 81.79
53 TestForceSystemdEnv 44.11
55 TestKVMDriverInstallOrUpdate 1.09
59 TestErrorSpam/setup 38.32
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.76
63 TestErrorSpam/unpause 1.96
64 TestErrorSpam/stop 5.45
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.29
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 26.89
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
76 TestFunctional/serial/CacheCmd/cache/add_local 2.33
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 32.1
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.46
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.5
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 19.09
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.85
98 TestFunctional/parallel/ServiceCmdConnect 21.51
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 44.86
102 TestFunctional/parallel/SSHCmd 0.44
103 TestFunctional/parallel/CpCmd 1.37
104 TestFunctional/parallel/MySQL 22.78
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.37
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.51
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.75
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.43
122 TestFunctional/parallel/ImageCommands/Setup 1.95
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.04
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.43
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.14
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
142 TestFunctional/parallel/ServiceCmd/DeployApp 15.18
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
144 TestFunctional/parallel/ProfileCmd/profile_list 0.37
145 TestFunctional/parallel/MountCmd/any-port 9.74
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
147 TestFunctional/parallel/ServiceCmd/List 1.28
148 TestFunctional/parallel/ServiceCmd/JSONOutput 1.33
149 TestFunctional/parallel/MountCmd/specific-port 2.01
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
151 TestFunctional/parallel/ServiceCmd/Format 0.33
152 TestFunctional/parallel/ServiceCmd/URL 0.31
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 199.88
162 TestMultiControlPlane/serial/DeployApp 7.36
163 TestMultiControlPlane/serial/PingHostFromPods 1.2
164 TestMultiControlPlane/serial/AddWorkerNode 44.26
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
167 TestMultiControlPlane/serial/CopyFile 13.27
168 TestMultiControlPlane/serial/StopSecondaryNode 76.07
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.19
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.11
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 502.25
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.46
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
175 TestMultiControlPlane/serial/StopCluster 262.23
176 TestMultiControlPlane/serial/RestartCluster 123.79
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 69.35
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
183 TestJSONOutput/start/Command 76.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.72
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.42
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 77.79
215 TestMountStart/serial/StartWithMountFirst 22.05
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 20.59
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.7
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 19.54
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 99.22
227 TestMultiNode/serial/DeployApp2Nodes 5.73
228 TestMultiNode/serial/PingHostFrom2Pods 0.82
229 TestMultiNode/serial/AddNode 62.41
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.57
232 TestMultiNode/serial/CopyFile 7.35
233 TestMultiNode/serial/StopNode 2.59
234 TestMultiNode/serial/StartAfterStop 133.84
235 TestMultiNode/serial/RestartKeepsNodes 312.69
236 TestMultiNode/serial/DeleteNode 2.87
237 TestMultiNode/serial/StopMultiNode 170.19
238 TestMultiNode/serial/RestartMultiNode 86.76
239 TestMultiNode/serial/ValidateNameConflict 43.16
246 TestScheduledStopUnix 111.08
250 TestRunningBinaryUpgrade 112.87
252 TestKubernetesUpgrade 192.14
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 79.17
264 TestNetworkPlugins/group/false 3.32
268 TestStoppedBinaryUpgrade/Setup 2.56
269 TestStoppedBinaryUpgrade/Upgrade 149.3
270 TestNoKubernetes/serial/StartWithStopK8s 29.97
271 TestNoKubernetes/serial/Start 36.18
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
273 TestNoKubernetes/serial/ProfileList 1.19
274 TestNoKubernetes/serial/Stop 1.4
275 TestNoKubernetes/serial/StartNoArgs 57.1
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
285 TestPause/serial/Start 64.27
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
288 TestNetworkPlugins/group/auto/Start 90.62
289 TestNetworkPlugins/group/kindnet/Start 97.59
290 TestNetworkPlugins/group/calico/Start 74.24
291 TestNetworkPlugins/group/auto/KubeletFlags 0.22
292 TestNetworkPlugins/group/auto/NetCatPod 9.27
293 TestNetworkPlugins/group/auto/DNS 0.24
294 TestNetworkPlugins/group/auto/Localhost 0.14
295 TestNetworkPlugins/group/auto/HairPin 0.15
296 TestNetworkPlugins/group/custom-flannel/Start 114.69
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/calico/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
300 TestNetworkPlugins/group/kindnet/NetCatPod 12.23
301 TestNetworkPlugins/group/calico/KubeletFlags 0.32
302 TestNetworkPlugins/group/calico/NetCatPod 10.31
303 TestNetworkPlugins/group/kindnet/DNS 0.19
304 TestNetworkPlugins/group/kindnet/Localhost 0.14
305 TestNetworkPlugins/group/kindnet/HairPin 0.12
306 TestNetworkPlugins/group/calico/DNS 0.16
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.16
309 TestNetworkPlugins/group/enable-default-cni/Start 85.01
310 TestNetworkPlugins/group/flannel/Start 99.22
311 TestNetworkPlugins/group/bridge/Start 83.45
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.35
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12
316 TestNetworkPlugins/group/custom-flannel/DNS 0.19
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestStartStop/group/old-k8s-version/serial/FirstStart 96.16
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
326 TestNetworkPlugins/group/flannel/NetCatPod 11.23
328 TestStartStop/group/no-preload/serial/FirstStart 113
329 TestNetworkPlugins/group/flannel/DNS 0.17
330 TestNetworkPlugins/group/flannel/Localhost 0.12
331 TestNetworkPlugins/group/flannel/HairPin 0.12
333 TestStartStop/group/embed-certs/serial/FirstStart 89.15
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
335 TestNetworkPlugins/group/bridge/NetCatPod 11.23
336 TestNetworkPlugins/group/bridge/DNS 0.18
337 TestNetworkPlugins/group/bridge/Localhost 0.16
338 TestNetworkPlugins/group/bridge/HairPin 0.15
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.8
341 TestStartStop/group/old-k8s-version/serial/DeployApp 12.38
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
343 TestStartStop/group/old-k8s-version/serial/Stop 88.91
344 TestStartStop/group/no-preload/serial/DeployApp 11.28
345 TestStartStop/group/embed-certs/serial/DeployApp 11.3
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
348 TestStartStop/group/no-preload/serial/Stop 82.98
349 TestStartStop/group/embed-certs/serial/Stop 83.89
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.31
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/old-k8s-version/serial/SecondStart 45.27
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/no-preload/serial/SecondStart 59.75
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/embed-certs/serial/SecondStart 60.68
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.12
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
364 TestStartStop/group/old-k8s-version/serial/Pause 3.62
366 TestStartStop/group/newest-cni/serial/FirstStart 55.75
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.01
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
371 TestStartStop/group/no-preload/serial/Pause 3.78
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
375 TestStartStop/group/embed-certs/serial/Pause 2.86
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
381 TestStartStop/group/newest-cni/serial/Stop 10.67
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/newest-cni/serial/SecondStart 32.58
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
387 TestStartStop/group/newest-cni/serial/Pause 2.54
TestDownloadOnly/v1.28.0/json-events (23.85s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-190516 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-190516 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.847650092s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.85s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1020 11:57:06.717550  143131 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1020 11:57:06.717695  143131 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-190516
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-190516: exit status 85 (65.862083ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-190516 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-190516 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:56:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:56:42.914480  143144 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:56:42.914758  143144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:42.914769  143144 out.go:374] Setting ErrFile to fd 2...
	I1020 11:56:42.914774  143144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:56:42.914959  143144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	W1020 11:56:42.915091  143144 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21773-139101/.minikube/config/config.json: open /home/jenkins/minikube-integration/21773-139101/.minikube/config/config.json: no such file or directory
	I1020 11:56:42.915673  143144 out.go:368] Setting JSON to true
	I1020 11:56:42.917425  143144 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2338,"bootTime":1760959065,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:56:42.917525  143144 start.go:141] virtualization: kvm guest
	I1020 11:56:42.919388  143144 out.go:99] [download-only-190516] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1020 11:56:42.919524  143144 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball: no such file or directory
	I1020 11:56:42.919574  143144 notify.go:220] Checking for updates...
	I1020 11:56:42.921010  143144 out.go:171] MINIKUBE_LOCATION=21773
	I1020 11:56:42.922301  143144 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:56:42.923592  143144 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 11:56:42.924634  143144 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 11:56:42.925627  143144 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1020 11:56:42.930783  143144 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 11:56:42.931091  143144 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:56:43.403833  143144 out.go:99] Using the kvm2 driver based on user configuration
	I1020 11:56:43.403877  143144 start.go:305] selected driver: kvm2
	I1020 11:56:43.403884  143144 start.go:925] validating driver "kvm2" against <nil>
	I1020 11:56:43.404251  143144 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:56:43.404388  143144 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:56:43.420531  143144 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:56:43.420564  143144 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:56:43.433977  143144 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:56:43.434023  143144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:56:43.434687  143144 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1020 11:56:43.434924  143144 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 11:56:43.434961  143144 cni.go:84] Creating CNI manager for ""
	I1020 11:56:43.435037  143144 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 11:56:43.435053  143144 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1020 11:56:43.435145  143144 start.go:349] cluster config:
	{Name:download-only-190516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-190516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:56:43.435396  143144 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:56:43.438057  143144 out.go:99] Downloading VM boot image ...
	I1020 11:56:43.438129  143144 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21773-139101/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1020 11:56:54.135668  143144 out.go:99] Starting "download-only-190516" primary control-plane node in "download-only-190516" cluster
	I1020 11:56:54.135700  143144 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 11:56:54.240677  143144 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1020 11:56:54.240725  143144 cache.go:58] Caching tarball of preloaded images
	I1020 11:56:54.240912  143144 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 11:56:54.242576  143144 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1020 11:56:54.242592  143144 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1020 11:56:54.356835  143144 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1020 11:56:54.356954  143144 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-190516 host does not exist
	  To start a cluster, run: "minikube start -p download-only-190516"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
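
The "Last Start" log above also documents the preload fetch protocol: the md5 checksum is first obtained from the GCS API and then appended to the download URL as checksum=md5:..., so the tarball can be verified after transfer. A hand-rolled equivalent of that download, using the exact URL and checksum recorded in the log:

	url=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	curl -fLO "$url"
	# Checksum value taken from the "Got checksum from GCS API" line above.
	echo "72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -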

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-190516
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (13.5s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-070035 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-070035 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (13.504235249s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.50s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1020 11:57:20.577738  143131 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1020 11:57:20.577788  143131 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-070035
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-070035: exit status 85 (62.574518ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-190516 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-190516 │ jenkins │ v1.37.0 │ 20 Oct 25 11:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │ 20 Oct 25 11:57 UTC │
	│ delete  │ -p download-only-190516                                                                                                                                                                             │ download-only-190516 │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │ 20 Oct 25 11:57 UTC │
	│ start   │ -o=json --download-only -p download-only-070035 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-070035 │ jenkins │ v1.37.0 │ 20 Oct 25 11:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 11:57:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 11:57:07.119036  143412 out.go:360] Setting OutFile to fd 1 ...
	I1020 11:57:07.119342  143412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:57:07.119354  143412 out.go:374] Setting ErrFile to fd 2...
	I1020 11:57:07.119358  143412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 11:57:07.119564  143412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 11:57:07.120048  143412 out.go:368] Setting JSON to true
	I1020 11:57:07.121081  143412 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2362,"bootTime":1760959065,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 11:57:07.121183  143412 start.go:141] virtualization: kvm guest
	I1020 11:57:07.123009  143412 out.go:99] [download-only-070035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 11:57:07.123169  143412 notify.go:220] Checking for updates...
	I1020 11:57:07.124446  143412 out.go:171] MINIKUBE_LOCATION=21773
	I1020 11:57:07.125829  143412 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 11:57:07.127125  143412 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 11:57:07.128265  143412 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 11:57:07.129267  143412 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1020 11:57:07.131551  143412 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 11:57:07.131802  143412 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 11:57:07.166323  143412 out.go:99] Using the kvm2 driver based on user configuration
	I1020 11:57:07.166367  143412 start.go:305] selected driver: kvm2
	I1020 11:57:07.166376  143412 start.go:925] validating driver "kvm2" against <nil>
	I1020 11:57:07.166800  143412 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:57:07.166887  143412 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:57:07.182120  143412 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:57:07.182153  143412 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21773-139101/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1020 11:57:07.197132  143412 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1020 11:57:07.197185  143412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 11:57:07.197961  143412 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1020 11:57:07.198242  143412 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 11:57:07.198274  143412 cni.go:84] Creating CNI manager for ""
	I1020 11:57:07.198339  143412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1020 11:57:07.198351  143412 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1020 11:57:07.198432  143412 start.go:349] cluster config:
	{Name:download-only-070035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-070035 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 11:57:07.198575  143412 iso.go:125] acquiring lock: {Name:mkd67d5e4d53c86a118fdead81d797bfefc14d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 11:57:07.200208  143412 out.go:99] Starting "download-only-070035" primary control-plane node in "download-only-070035" cluster
	I1020 11:57:07.200231  143412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:57:07.304964  143412 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1020 11:57:07.304998  143412 cache.go:58] Caching tarball of preloaded images
	I1020 11:57:07.305185  143412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 11:57:07.306905  143412 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1020 11:57:07.306930  143412 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1020 11:57:07.420355  143412 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1020 11:57:07.420442  143412 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21773-139101/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-070035 host does not exist
	  To start a cluster, run: "minikube start -p download-only-070035"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-070035
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1020 11:57:21.182050  143131 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-169246 --alsologtostderr --binary-mirror http://127.0.0.1:39247 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-169246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-169246
--- PASS: TestBinaryMirror (0.66s)
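
As the binary.go line above shows, kubectl is normally fetched from dl.k8s.io with a sha256 checksum, and --binary-mirror substitutes the base URL for those downloads. A rough sketch of running against a locally served mirror; the ./mirror directory layout (mirroring release/<version>/bin/linux/amd64/) and the profile name binary-mirror-demo are assumptions, not taken from the test:

	# Serve a directory shaped like the upstream release tree, e.g. ./mirror/v1.34.1/bin/linux/amd64/kubectl
	( cd ./mirror && python3 -m http.server 39247 ) &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:39247 --driver=kvm2 --container-runtime=crio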

TestOffline (112.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-488144 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-488144 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m51.987073103s)
helpers_test.go:175: Cleaning up "offline-crio-488144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-488144
--- PASS: TestOffline (112.93s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323619
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-323619: exit status 85 (55.20016ms)

-- stdout --
	* Profile "addons-323619" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323619"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323619
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-323619: exit status 85 (54.837048ms)

-- stdout --
	* Profile "addons-323619" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323619"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (205.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-323619 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-323619 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m25.364882916s)
--- PASS: TestAddons/Setup (205.37s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-323619 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-323619 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-323619 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-323619 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f01822ea-7da0-4ac7-a696-823399920504] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f01822ea-7da0-4ac7-a696-823399920504] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003789523s
addons_test.go:694: (dbg) Run:  kubectl --context addons-323619 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-323619 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-323619 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

TestAddons/parallel/Registry (16.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.305819ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ztdx9" [e0f3051e-f382-4b73-bc54-fc3e72c133dc] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.097359168s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-d9xww" [7621b093-dc68-4763-8bf1-6acf5e291d3d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009817294s
addons_test.go:392: (dbg) Run:  kubectl --context addons-323619 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-323619 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-323619 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.865505343s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 ip
2025/10/20 12:01:23 [DEBUG] GET http://192.168.39.233:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.81s)
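For reference: the wget --spider probe above only confirms that the registry Service answers HTTP from inside the cluster. A minimal in-cluster equivalent in Go, assuming the Service DNS name shown in the log (like the busybox pod, this must run inside the cluster to resolve that name):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// HEAD mirrors wget --spider: request headers only, no body download.
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}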

TestAddons/parallel/RegistryCreds (0.77s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.008217ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-323619
addons_test.go:332: (dbg) Run:  kubectl --context addons-323619 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

TestAddons/parallel/InspektorGadget (6.38s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mmzsg" [e208c3aa-f9c8-4fc1-b8c3-ba4f9c68dbdf] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.008063469s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.38s)

TestAddons/parallel/MetricsServer (5.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.280525ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-p578g" [efbbc581-2f4c-4fce-bdc8-f1da295b4b7e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.097510367s
addons_test.go:463: (dbg) Run:  kubectl --context addons-323619 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

TestAddons/parallel/CSI (55.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1020 12:01:26.136910  143131 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1020 12:01:26.143743  143131 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1020 12:01:26.143777  143131 kapi.go:107] duration metric: took 6.873054ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.885516ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-323619 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323619 get pvc hpvc -o jsonpath={.status.phase} -n default  [identical poll repeated 15 times until the PVC bound]
addons_test.go:562: (dbg) Run:  kubectl --context addons-323619 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3f1d5f12-4a1b-4b00-8721-7859d2e6ae84] Pending
helpers_test.go:352: "task-pv-pod" [3f1d5f12-4a1b-4b00-8721-7859d2e6ae84] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3f1d5f12-4a1b-4b00-8721-7859d2e6ae84] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.005011817s
addons_test.go:572: (dbg) Run:  kubectl --context addons-323619 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323619 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323619 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-323619 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-323619 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-323619 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323619 get pvc hpvc-restore -o jsonpath={.status.phase} -n default  [identical poll repeated 5 times until the PVC bound]
addons_test.go:604: (dbg) Run:  kubectl --context addons-323619 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a137790d-e8ef-4cc5-b633-a54dd612c522] Pending
helpers_test.go:352: "task-pv-pod-restore" [a137790d-e8ef-4cc5-b633-a54dd612c522] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a137790d-e8ef-4cc5-b633-a54dd612c522] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004081772s
addons_test.go:614: (dbg) Run:  kubectl --context addons-323619 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-323619 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-323619 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.835202974s)
--- PASS: TestAddons/parallel/CSI (55.42s)
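For reference: the repeated helpers_test.go:402 lines above form a poll loop — kubectl is re-run until the PVC's .status.phase reads Bound. A standalone sketch of that loop, with the profile, claim name, and timeout taken from the log; the 2-second interval is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls the claim's phase via kubectl until it is Bound or
// the timeout expires.
func waitPVCBound(context, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil // the claim has been provisioned and bound
		}
		time.Sleep(2 * time.Second) // back off between polls
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-323619", "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("hpvc is Bound")
}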

TestAddons/parallel/Headlamp (20.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-323619 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vx44v" [487e0fa2-9394-4982-a8dc-9d1baf601296] Pending
helpers_test.go:352: "headlamp-6945c6f4d-vx44v" [487e0fa2-9394-4982-a8dc-9d1baf601296] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vx44v" [487e0fa2-9394-4982-a8dc-9d1baf601296] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vx44v" [487e0fa2-9394-4982-a8dc-9d1baf601296] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.007139736s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable headlamp --alsologtostderr -v=1: (5.950399515s)
--- PASS: TestAddons/parallel/Headlamp (20.91s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-c44bn" [aa83f150-70dd-42ac-8021-49447a973dfe] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004723891s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (56.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-323619 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-323619 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323619 get pvc test-pvc -o jsonpath={.status.phase} -n default  [identical poll repeated 8 times until the PVC bound]
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5924bdb5-2589-429e-8f36-eae2f725fddb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5924bdb5-2589-429e-8f36-eae2f725fddb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5924bdb5-2589-429e-8f36-eae2f725fddb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004255178s
addons_test.go:967: (dbg) Run:  kubectl --context addons-323619 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 ssh "cat /opt/local-path-provisioner/pvc-46c0c8c7-7629-4c0a-b0ce-cef91ed80b06_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-323619 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-323619 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.00383733s)
--- PASS: TestAddons/parallel/LocalPath (56.85s)

TestAddons/parallel/NvidiaDevicePlugin (6.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8bl6k" [f8e8140e-4e5f-4f90-ab0a-d58e0f710081] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004740008s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (11.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tmmwt" [4dc2ef70-9024-408e-abc9-a84554e33ac7] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003947469s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323619 addons disable yakd --alsologtostderr -v=1: (5.814482953s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

TestAddons/StoppedEnableDisable (88.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-323619
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-323619: (1m28.346049608s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323619
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323619
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-323619
--- PASS: TestAddons/StoppedEnableDisable (88.66s)

TestCertOptions (83.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-341854 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-341854 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.842130045s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-341854 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-341854 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-341854 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-341854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-341854
--- PASS: TestCertOptions (83.32s)

TestCertExpiration (300.99s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-324693 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-324693 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.459746062s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-324693 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 13:03:26.846218  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-324693 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.636272266s)
helpers_test.go:175: Cleaning up "cert-expiration-324693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-324693
--- PASS: TestCertExpiration (300.99s)

TestForceSystemdFlag (81.79s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-850858 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-850858 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.68520765s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-850858 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
E1020 13:00:47.920024  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "force-systemd-flag-850858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-850858
--- PASS: TestForceSystemdFlag (81.79s)

TestForceSystemdEnv (44.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-533981 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-533981 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.074426554s)
helpers_test.go:175: Cleaning up "force-systemd-env-533981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-533981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-533981: (1.031742223s)
--- PASS: TestForceSystemdEnv (44.11s)

TestKVMDriverInstallOrUpdate (1.09s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1020 12:59:46.708813  143131 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1020 12:59:46.708937  143131 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2257993613/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1020 12:59:46.737600  143131 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2257993613/001/docker-machine-driver-kvm2 version is 1.1.1
W1020 12:59:46.737644  143131 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1020 12:59:46.737803  143131 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1020 12:59:46.737858  143131 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2257993613/001/docker-machine-driver-kvm2
I1020 12:59:47.655532  143131 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2257993613/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1020 12:59:47.671219  143131 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2257993613/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.09s)
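For reference: the install.go lines above show the driver update flow — run the installed docker-machine-driver-kvm2, parse its version, and download the wanted release when they differ. A simplified sketch of that check; the "version" subcommand, output parsing, and binary path are assumptions for illustration, not minikube's exact code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// driverVersion runs the driver binary and extracts an x.y.z version
// token from its output.
func driverVersion(path string) (string, error) {
	out, err := exec.Command(path, "version").CombinedOutput()
	if err != nil {
		return "", err
	}
	for _, f := range strings.Fields(string(out)) {
		if strings.Count(f, ".") == 2 { // crude x.y.z match
			return strings.TrimPrefix(f, "v"), nil
		}
	}
	return "", fmt.Errorf("no version found in output: %q", out)
}

func main() {
	const want = "1.37.0"
	got, err := driverVersion("/usr/local/bin/docker-machine-driver-kvm2")
	if err != nil {
		panic(err)
	}
	if got != want {
		fmt.Printf("driver is %s, want %s: fetch the %s release\n", got, want, want)
		// a real updater would download the checksummed release asset here
	}
}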

TestErrorSpam/setup (38.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-928882 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-928882 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:05:47.920549  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:47.929332  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:47.940966  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:47.962436  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:48.003927  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:48.085446  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:48.247115  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:48.568953  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:49.210749  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:50.492485  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:05:53.055511  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-928882 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-928882 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.319102107s)
--- PASS: TestErrorSpam/setup (38.32s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 pause
E1020 12:05:58.177638  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.96s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

TestErrorSpam/stop (5.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop: (2.022591283s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop: (1.876785532s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-928882 --log_dir /tmp/nospam-928882 stop: (1.550305372s)
--- PASS: TestErrorSpam/stop (5.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21773-139101/.minikube/files/etc/test/nested/copy/143131/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.29s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:06:08.419620  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:06:28.901797  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:07:09.864157  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-732631 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.292077989s)
--- PASS: TestFunctional/serial/StartWithProxy (80.29s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.89s)

=== RUN   TestFunctional/serial/SoftStart
I1020 12:07:28.083116  143131 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-732631 --alsologtostderr -v=8: (26.889639023s)
functional_test.go:678: soft start took 26.890317425s for "functional-732631" cluster.
I1020 12:07:54.973155  143131 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (26.89s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-732631 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:3.1: (1.132618526s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:3.3: (1.177877737s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 cache add registry.k8s.io/pause:latest: (1.154219763s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-732631 /tmp/TestFunctionalserialCacheCmdcacheadd_local624823773/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache add minikube-local-cache-test:functional-732631
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 cache add minikube-local-cache-test:functional-732631: (1.984008604s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache delete minikube-local-cache-test:functional-732631
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-732631
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (243.686042ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 cache reload: (1.065132981s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
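For reference: cache_reload above is a four-step round trip — delete the image on the node, confirm crictl no longer sees it, run cache reload to re-push the host-cached copy, and confirm it is back. The same sequence as a shell-out sketch, with the binary path, profile, and image taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its invocation and output, and returns
// its error status.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	mk, profile, img := "out/minikube-linux-amd64", "functional-732631", "registry.k8s.io/pause:latest"
	// 1. remove the image from the node's container storage
	_ = run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
	// 2. inspecti should now fail — the image is gone from the node
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		panic("image unexpectedly still present on the node")
	}
	// 3. reload pushes the host-cached images back into the node
	if err := run(mk, "-p", profile, "cache", "reload"); err != nil {
		panic(err)
	}
	// 4. inspecti should succeed again
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		panic(err)
	}
}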

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 kubectl -- --context functional-732631 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-732631 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (32.1s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1020 12:08:31.787592  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-732631 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.095652654s)
functional_test.go:776: restart took 32.095784354s for "functional-732631" cluster.
I1020 12:08:35.549481  143131 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.10s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-732631 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 logs: (1.460699481s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 logs --file /tmp/TestFunctionalserialLogsFileCmd2527102115/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 logs --file /tmp/TestFunctionalserialLogsFileCmd2527102115/001/logs.txt: (1.477703332s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-732631 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-732631
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-732631: exit status 115 (314.586493ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.52:30656 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-732631 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)
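
Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the Service object exists but selects no running pod, so there is nothing to tunnel to. The same condition can be confirmed directly; a sketch:

	kubectl --context functional-732631 apply -f testdata/invalidsvc.yaml
	# An empty ENDPOINTS column means the selector matches no ready pods.
	kubectl --context functional-732631 get endpoints invalid-svc
	kubectl --context functional-732631 delete -f testdata/invalidsvc.yaml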

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 config get cpus: exit status 14 (63.662063ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 config get cpus: exit status 14 (58.666206ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
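
Note: the two exit status 14 results above are the point of the test: `config get` on an unset key fails, while set/get/unset succeed. The same round trip by hand, a sketch:

	out/minikube-linux-amd64 -p functional-732631 config set cpus 2
	out/minikube-linux-amd64 -p functional-732631 config get cpus     # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-732631 config unset cpus
	out/minikube-linux-amd64 -p functional-732631 config get cpus     # exit 14: key not in config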

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-732631 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-732631 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 151524: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.09s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (187.061766ms)

                                                
                                                
-- stdout --
	* [functional-732631] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:09:08.501886  151418 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:09:08.501988  151418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:09:08.501993  151418 out.go:374] Setting ErrFile to fd 2...
	I1020 12:09:08.501997  151418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:09:08.502214  151418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:09:08.502763  151418 out.go:368] Setting JSON to false
	I1020 12:09:08.503781  151418 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3083,"bootTime":1760959065,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:09:08.503897  151418 start.go:141] virtualization: kvm guest
	I1020 12:09:08.549513  151418 out.go:179] * [functional-732631] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:09:08.551306  151418 notify.go:220] Checking for updates...
	I1020 12:09:08.551365  151418 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:09:08.552787  151418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:09:08.554099  151418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:09:08.555772  151418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 12:09:08.556980  151418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:09:08.558081  151418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:09:08.559793  151418 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:09:08.560357  151418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:09:08.560495  151418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:09:08.576492  151418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I1020 12:09:08.577067  151418 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:09:08.577703  151418 main.go:141] libmachine: Using API Version  1
	I1020 12:09:08.577744  151418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:09:08.578156  151418 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:09:08.578354  151418 main.go:141] libmachine: (functional-732631) Calling .DriverName
	I1020 12:09:08.578609  151418 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:09:08.578946  151418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:09:08.578987  151418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:09:08.593469  151418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42497
	I1020 12:09:08.594042  151418 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:09:08.594606  151418 main.go:141] libmachine: Using API Version  1
	I1020 12:09:08.594633  151418 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:09:08.595024  151418 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:09:08.595247  151418 main.go:141] libmachine: (functional-732631) Calling .DriverName
	I1020 12:09:08.629382  151418 out.go:179] * Using the kvm2 driver based on existing profile
	I1020 12:09:08.630608  151418 start.go:305] selected driver: kvm2
	I1020 12:09:08.630625  151418 start.go:925] validating driver "kvm2" against &{Name:functional-732631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-732631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:09:08.630734  151418 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:09:08.632721  151418 out.go:203] 
	W1020 12:09:08.633899  151418 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1020 12:09:08.635202  151418 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.33s)
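
Note: exit status 23 comes from the RSRC_INSUFFICIENT_REQ_MEMORY validation: the requested 250MiB is below the 1800MB usable minimum. Because --dry-run performs the same validation without touching the VM, the boundary is cheap to probe; a sketch:

	out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB;  echo $?   # 23: below the minimum
	out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 2048MB; echo $?   # 0: passes validation, starts nothing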

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (150.364994ms)

                                                
                                                
-- stdout --
	* [functional-732631] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 12:09:08.352873  151380 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:09:08.353004  151380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:09:08.353017  151380 out.go:374] Setting ErrFile to fd 2...
	I1020 12:09:08.353023  151380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:09:08.353450  151380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:09:08.353968  151380 out.go:368] Setting JSON to false
	I1020 12:09:08.355059  151380 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3083,"bootTime":1760959065,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:09:08.355187  151380 start.go:141] virtualization: kvm guest
	I1020 12:09:08.357249  151380 out.go:179] * [functional-732631] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1020 12:09:08.358505  151380 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:09:08.358539  151380 notify.go:220] Checking for updates...
	I1020 12:09:08.360810  151380 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:09:08.361987  151380 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:09:08.363258  151380 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 12:09:08.364421  151380 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:09:08.365566  151380 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:09:08.367147  151380 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:09:08.367702  151380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:09:08.367782  151380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:09:08.383996  151380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I1020 12:09:08.384524  151380 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:09:08.385015  151380 main.go:141] libmachine: Using API Version  1
	I1020 12:09:08.385037  151380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:09:08.385498  151380 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:09:08.385736  151380 main.go:141] libmachine: (functional-732631) Calling .DriverName
	I1020 12:09:08.386054  151380 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:09:08.386551  151380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:09:08.386634  151380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:09:08.403129  151380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I1020 12:09:08.403673  151380 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:09:08.404112  151380 main.go:141] libmachine: Using API Version  1
	I1020 12:09:08.404135  151380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:09:08.404541  151380 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:09:08.404752  151380 main.go:141] libmachine: (functional-732631) Calling .DriverName
	I1020 12:09:08.441272  151380 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1020 12:09:08.442510  151380 start.go:305] selected driver: kvm2
	I1020 12:09:08.442529  151380 start.go:925] validating driver "kvm2" against &{Name:functional-732631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-732631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:09:08.442711  151380 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:09:08.445254  151380 out.go:203] 
	W1020 12:09:08.446391  151380 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1020 12:09:08.447439  151380 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
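
Note: the French output ("Utilisation du pilote kvm2 basé sur le profil existant" = "Using the kvm2 driver based on existing profile"; the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY error as in DryRun) confirms translations are applied even on the failure path. minikube selects the language from the locale environment, so the run can be reproduced roughly as follows, a sketch assuming a French locale is installed on the host:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-732631 --dry-run --memory 250MB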

                                                
                                    
TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (21.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-732631 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-732631 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2m2m8" [1110d42f-7ac8-45a1-84c4-f7943afc84cf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2m2m8" [1110d42f-7ac8-45a1-84c4-f7943afc84cf] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.004042456s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.52:31847
functional_test.go:1680: http://192.168.39.52:31847: success! body:
Request served by hello-node-connect-7d85dfc575-2m2m8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.52:31847
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.51s)
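
Note: the test follows the standard deploy -> expose -> resolve-URL -> request sequence for reaching a NodePort service from the host; condensed into a sketch:

	kubectl --context functional-732631 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-732631 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-732631 service hello-node-connect --url)
	curl -s "$URL"   # echo-server answers with the request it received, as in the body above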

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fc8129ca-e5b2-4910-9cc8-9924312bfbf3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00366886s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-732631 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-732631 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-732631 get pvc myclaim -o=json
I1020 12:08:50.527504  143131 retry.go:31] will retry after 2.960227624s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f5ec2520-7f15-441a-9862-c12ed2444cb6 ResourceVersion:739 Generation:0 CreationTimestamp:2025-10-20 12:08:50 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017703f0 VolumeMode:0xc001770400 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-732631 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-732631 apply -f testdata/storage-provisioner/pod.yaml
I1020 12:08:53.709393  143131 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8e33670c-df58-4fee-a641-a5443d6f8c2d] Pending
helpers_test.go:352: "sp-pod" [8e33670c-df58-4fee-a641-a5443d6f8c2d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8e33670c-df58-4fee-a641-a5443d6f8c2d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.006616451s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-732631 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-732631 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-732631 apply -f testdata/storage-provisioner/pod.yaml
I1020 12:09:13.950365  143131 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a421fb77-4205-4c7e-a4ce-60b9a28b227b] Pending
helpers_test.go:352: "sp-pod" [a421fb77-4205-4c7e-a4ce-60b9a28b227b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a421fb77-4205-4c7e-a4ce-60b9a28b227b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004817651s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-732631 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.86s)
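
Note: the second sp-pod is what makes this a persistence test: /tmp/mount/foo is written in the first pod, the pod is deleted, and the file must still be visible from a fresh pod bound to the same PVC. The core of the check, condensed into a sketch:

	kubectl --context functional-732631 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-732631 get pvc myclaim -o jsonpath='{.status.phase}'   # wait until Bound
	kubectl --context functional-732631 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-732631 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-732631 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-732631 exec sp-pod -- ls /tmp/mount                    # foo survives the pod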

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh -n functional-732631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cp functional-732631:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2162117550/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh -n functional-732631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh -n functional-732631 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
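
Note: the three invocations cover both copy directions plus creation of a missing target directory. The general forms, a sketch:

	out/minikube-linux-amd64 -p functional-732631 cp testdata/cp-test.txt /home/docker/cp-test.txt          # host -> node
	out/minikube-linux-amd64 -p functional-732631 cp functional-732631:/home/docker/cp-test.txt ./copy.txt  # node -> host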

                                                
                                    
TestFunctional/parallel/MySQL (22.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-732631 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-mm2lc" [10d9b650-ebe4-474b-a435-5fb0db021aa6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-mm2lc" [10d9b650-ebe4-474b-a435-5fb0db021aa6] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.294614675s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- mysql -ppassword -e "show databases;": exit status 1 (549.300763ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1020 12:09:03.614963  143131 retry.go:31] will retry after 1.213295775s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- mysql -ppassword -e "show databases;": exit status 1 (140.546422ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1020 12:09:04.969958  143131 retry.go:31] will retry after 1.255102365s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.78s)
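
Note: the two ERROR 2002 failures are benign: the pod reports Running before mysqld has bound its socket, so the test retries until the server answers. A hand-rolled equivalent of that retry loop, a sketch using the pod name from the log above:

	until kubectl --context functional-732631 exec mysql-5bb876957f-mm2lc -- \
	      mysql -ppassword -e "show databases;"; do
	  sleep 2   # socket not up yet; try again
	done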

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/143131/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /etc/test/nested/copy/143131/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
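
Note: file sync works by copying everything under MINIKUBE_HOME's files/ tree into the node at the same absolute path during provisioning, which is how /etc/test/nested/copy/143131/hosts got into the VM. Roughly, a sketch assuming the default ~/.minikube:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/143131
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/143131/hosts
	minikube start                                             # files/ is synced in at start
	minikube ssh "sudo cat /etc/test/nested/copy/143131/hosts"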

                                                
                                    
TestFunctional/parallel/CertSync (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/143131.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /etc/ssl/certs/143131.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/143131.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /usr/share/ca-certificates/143131.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1431312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /etc/ssl/certs/1431312.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1431312.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /usr/share/ca-certificates/1431312.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)
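
Note: the .pem files are certs dropped under MINIKUBE_HOME before start, and /etc/ssl/certs/51391683.0 is the same CA under its OpenSSL subject-hash name. The hashed basename can be derived directly; a sketch, assuming the cert lives under ~/.minikube/certs:

	# Prints the subject hash (e.g. 51391683) used to name the /etc/ssl/certs/<hash>.0 copy.
	openssl x509 -hash -noout -in ~/.minikube/certs/143131.pem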

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-732631 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active docker": exit status 1 (238.016952ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active containerd": exit status 1 (221.346922ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
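
Note: the two exit status 1 results are the expected outcome: with CRI-O selected, docker and containerd must be inactive. `systemctl is-active` exits 3 for an inactive unit (hence "Process exited with status 3"), which `minikube ssh` surfaces as a non-zero exit. Checked by hand, a sketch assuming the CRI-O unit is named crio:

	out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active crio"     # active, exit 0
	out/minikube-linux-amd64 -p functional-732631 ssh "sudo systemctl is-active docker"   # inactive, non-zero exit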

                                                
                                    
TestFunctional/parallel/License (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732631 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-732631
localhost/kicbase/echo-server:functional-732631
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732631 image ls --format short --alsologtostderr:
I1020 12:09:19.371395  152210 out.go:360] Setting OutFile to fd 1 ...
I1020 12:09:19.371706  152210 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:19.371716  152210 out.go:374] Setting ErrFile to fd 2...
I1020 12:09:19.371720  152210 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:19.371930  152210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
I1020 12:09:19.372517  152210 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:19.372605  152210 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:19.372962  152210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:19.373028  152210 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:19.387671  152210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
I1020 12:09:19.388208  152210 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:19.388834  152210 main.go:141] libmachine: Using API Version  1
I1020 12:09:19.388870  152210 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:19.389290  152210 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:19.389542  152210 main.go:141] libmachine: (functional-732631) Calling .GetState
I1020 12:09:19.391929  152210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:19.391994  152210 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:19.407034  152210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
I1020 12:09:19.407499  152210 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:19.408038  152210 main.go:141] libmachine: Using API Version  1
I1020 12:09:19.408068  152210 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:19.408505  152210 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:19.408721  152210 main.go:141] libmachine: (functional-732631) Calling .DriverName
I1020 12:09:19.408942  152210 ssh_runner.go:195] Run: systemctl --version
I1020 12:09:19.408968  152210 main.go:141] libmachine: (functional-732631) Calling .GetSSHHostname
I1020 12:09:19.412646  152210 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:19.413129  152210 main.go:141] libmachine: (functional-732631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:f0:ea", ip: ""} in network mk-functional-732631: {Iface:virbr1 ExpiryTime:2025-10-20 13:06:22 +0000 UTC Type:0 Mac:52:54:00:fa:f0:ea Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-732631 Clientid:01:52:54:00:fa:f0:ea}
I1020 12:09:19.413160  152210 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined IP address 192.168.39.52 and MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:19.413373  152210 main.go:141] libmachine: (functional-732631) Calling .GetSSHPort
I1020 12:09:19.413580  152210 main.go:141] libmachine: (functional-732631) Calling .GetSSHKeyPath
I1020 12:09:19.413772  152210 main.go:141] libmachine: (functional-732631) Calling .GetSSHUsername
I1020 12:09:19.413910  152210 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/functional-732631/id_rsa Username:docker}
I1020 12:09:19.513789  152210 ssh_runner.go:195] Run: sudo crictl images --output json
I1020 12:09:19.593614  152210 main.go:141] libmachine: Making call to close driver server
I1020 12:09:19.593631  152210 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:19.593944  152210 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:19.593964  152210 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:19.593973  152210 main.go:141] libmachine: Making call to close driver server
I1020 12:09:19.593979  152210 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:19.594270  152210 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:19.594285  152210 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
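
Note: `image ls` shells out to `sudo crictl images --output json` on the node (visible in the stderr trace above) and renders the result in the requested format. The variants exercised by the neighboring subtests, a sketch:

	out/minikube-linux-amd64 -p functional-732631 image ls --format short   # repo:tag per line, as above
	out/minikube-linux-amd64 -p functional-732631 image ls --format table   # the boxed table in the next subtest
	out/minikube-linux-amd64 -p functional-732631 image ls --format json
	out/minikube-linux-amd64 -p functional-732631 image ls --format yaml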

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732631 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-732631  │ 0507ec6b7d5ff │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-732631  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732631 image ls --format table --alsologtostderr:
I1020 12:09:20.607713  152432 out.go:360] Setting OutFile to fd 1 ...
I1020 12:09:20.608182  152432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.608199  152432 out.go:374] Setting ErrFile to fd 2...
I1020 12:09:20.608207  152432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.608556  152432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
I1020 12:09:20.609565  152432 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.609730  152432 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.610470  152432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.610570  152432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.625531  152432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
I1020 12:09:20.626077  152432 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.626709  152432 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.626737  152432 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.627165  152432 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.627374  152432 main.go:141] libmachine: (functional-732631) Calling .GetState
I1020 12:09:20.629510  152432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.629563  152432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.644448  152432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
I1020 12:09:20.645055  152432 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.645640  152432 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.645671  152432 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.646042  152432 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.646292  152432 main.go:141] libmachine: (functional-732631) Calling .DriverName
I1020 12:09:20.646534  152432 ssh_runner.go:195] Run: systemctl --version
I1020 12:09:20.646575  152432 main.go:141] libmachine: (functional-732631) Calling .GetSSHHostname
I1020 12:09:20.650193  152432 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.650685  152432 main.go:141] libmachine: (functional-732631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:f0:ea", ip: ""} in network mk-functional-732631: {Iface:virbr1 ExpiryTime:2025-10-20 13:06:22 +0000 UTC Type:0 Mac:52:54:00:fa:f0:ea Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-732631 Clientid:01:52:54:00:fa:f0:ea}
I1020 12:09:20.650713  152432 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined IP address 192.168.39.52 and MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.650906  152432 main.go:141] libmachine: (functional-732631) Calling .GetSSHPort
I1020 12:09:20.651073  152432 main.go:141] libmachine: (functional-732631) Calling .GetSSHKeyPath
I1020 12:09:20.651185  152432 main.go:141] libmachine: (functional-732631) Calling .GetSSHUsername
I1020 12:09:20.651374  152432 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/functional-732631/id_rsa Username:docker}
I1020 12:09:20.777967  152432 ssh_runner.go:195] Run: sudo crictl images --output json
I1020 12:09:20.834497  152432 main.go:141] libmachine: Making call to close driver server
I1020 12:09:20.834522  152432 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:20.834892  152432 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:20.834917  152432 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:20.834927  152432 main.go:141] libmachine: Making call to close driver server
I1020 12:09:20.834935  152432 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:20.834937  152432 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:20.835215  152432 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:20.835265  152432 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:20.835276  152432 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
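
The table view above is a cosmetic rendering of the same image data the JSON and YAML formats print below: the IMAGE ID column is the 64-character image ID truncated to 13 characters, and SIZE is a human-readable rendering of the raw byte count. A minimal Go sketch of those two transformations, written for illustration against the values shown here rather than taken from minikube's source:

// Sketch (not minikube's code): the two cosmetic transformations visible in
// the table above -- truncated image IDs and SI-unit size rendering.
package main

import "fmt"

// truncateID shortens a full 64-hex image ID the way the IMAGE ID column does.
func truncateID(id string) string {
	if len(id) > 13 {
		return id[:13]
	}
	return id
}

// humanSize formats a raw byte count with SI units, matching values like
// "742kB" and "196MB" in the table.
func humanSize(bytes float64) string {
	units := []string{"B", "kB", "MB", "GB"}
	i := 0
	for bytes >= 1000 && i < len(units)-1 {
		bytes /= 1000
		i++
	}
	return fmt.Sprintf("%.3g%s", bytes, units[i])
}

func main() {
	fmt.Println(truncateID("cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f")) // cd073f4c5f6a8
	fmt.Println(humanSize(742092))    // 742kB  (pause:3.10.1)
	fmt.Println(humanSize(195976448)) // 196MB  (etcd:3.6.4-0)
}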

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732631 image ls --format json --alsologtostderr:
[{"id":"0507ec6b7d5ff64384aafe9b9bf7f3ff018fc1320735b0612169ff9918b8241b","repoDigests":["localhost/minikube-local-cache-test@sha256:ab5f790897f3cfd2b5c94b2b290bf353ee3a4118cac4219ef6272e7ce859bf41"],"repoTags":["localhost/minikube-local-cache-test:functional-732631"],"size":"3330"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-732631"],"size":"4943877"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732631 image ls --format json --alsologtostderr:
I1020 12:09:20.306727  152408 out.go:360] Setting OutFile to fd 1 ...
I1020 12:09:20.307013  152408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.307025  152408 out.go:374] Setting ErrFile to fd 2...
I1020 12:09:20.307030  152408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.307332  152408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
I1020 12:09:20.308243  152408 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.308430  152408 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.309059  152408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.309163  152408 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.325505  152408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37251
I1020 12:09:20.326074  152408 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.326671  152408 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.326695  152408 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.327146  152408 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.327366  152408 main.go:141] libmachine: (functional-732631) Calling .GetState
I1020 12:09:20.329703  152408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.329752  152408 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.344224  152408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
I1020 12:09:20.344792  152408 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.345385  152408 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.345438  152408 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.345809  152408 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.345992  152408 main.go:141] libmachine: (functional-732631) Calling .DriverName
I1020 12:09:20.346192  152408 ssh_runner.go:195] Run: systemctl --version
I1020 12:09:20.346221  152408 main.go:141] libmachine: (functional-732631) Calling .GetSSHHostname
I1020 12:09:20.349843  152408 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.350358  152408 main.go:141] libmachine: (functional-732631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:f0:ea", ip: ""} in network mk-functional-732631: {Iface:virbr1 ExpiryTime:2025-10-20 13:06:22 +0000 UTC Type:0 Mac:52:54:00:fa:f0:ea Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-732631 Clientid:01:52:54:00:fa:f0:ea}
I1020 12:09:20.350393  152408 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined IP address 192.168.39.52 and MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.350581  152408 main.go:141] libmachine: (functional-732631) Calling .GetSSHPort
I1020 12:09:20.350803  152408 main.go:141] libmachine: (functional-732631) Calling .GetSSHKeyPath
I1020 12:09:20.350970  152408 main.go:141] libmachine: (functional-732631) Calling .GetSSHUsername
I1020 12:09:20.351120  152408 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/functional-732631/id_rsa Username:docker}
I1020 12:09:20.457809  152408 ssh_runner.go:195] Run: sudo crictl images --output json
I1020 12:09:20.541135  152408 main.go:141] libmachine: Making call to close driver server
I1020 12:09:20.541155  152408 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:20.541484  152408 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:20.541507  152408 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:20.541517  152408 main.go:141] libmachine: Making call to close driver server
I1020 12:09:20.541525  152408 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:20.541785  152408 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:20.541826  152408 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:20.541834  152408 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
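
The JSON format is the machine-readable variant of the same listing. Below is a minimal Go sketch that decodes it; the struct fields (id, repoDigests, repoTags, size) are read off the output above, and the binary path and profile name mirror the command in the log. This is an illustrative consumer, not minikube's own type:

// Sketch: decode the `image ls --format json` array shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, encoded as a string
}

func main() {
	// Logs go to stderr (--alsologtostderr), so Output() sees only the JSON.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-732631",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		// IDs in this output are 64 hex chars, so the [:13] slice is safe here.
		fmt.Printf("%s  %v  %s bytes\n", img.ID[:13], img.RepoTags, img.Size)
	}
}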

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732631 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-732631
size: "4943877"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 0507ec6b7d5ff64384aafe9b9bf7f3ff018fc1320735b0612169ff9918b8241b
repoDigests:
- localhost/minikube-local-cache-test@sha256:ab5f790897f3cfd2b5c94b2b290bf353ee3a4118cac4219ef6272e7ce859bf41
repoTags:
- localhost/minikube-local-cache-test:functional-732631
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732631 image ls --format yaml --alsologtostderr:
I1020 12:09:19.659242  152263 out.go:360] Setting OutFile to fd 1 ...
I1020 12:09:19.659900  152263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:19.660029  152263 out.go:374] Setting ErrFile to fd 2...
I1020 12:09:19.660048  152263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:19.660353  152263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
I1020 12:09:19.661029  152263 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:19.661144  152263 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:19.661584  152263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:19.661638  152263 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:19.676759  152263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
I1020 12:09:19.677361  152263 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:19.677926  152263 main.go:141] libmachine: Using API Version  1
I1020 12:09:19.677948  152263 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:19.678469  152263 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:19.678690  152263 main.go:141] libmachine: (functional-732631) Calling .GetState
I1020 12:09:19.681037  152263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:19.681083  152263 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:19.695755  152263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
I1020 12:09:19.696274  152263 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:19.696840  152263 main.go:141] libmachine: Using API Version  1
I1020 12:09:19.696882  152263 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:19.697310  152263 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:19.697533  152263 main.go:141] libmachine: (functional-732631) Calling .DriverName
I1020 12:09:19.697760  152263 ssh_runner.go:195] Run: systemctl --version
I1020 12:09:19.697786  152263 main.go:141] libmachine: (functional-732631) Calling .GetSSHHostname
I1020 12:09:19.701238  152263 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:19.701752  152263 main.go:141] libmachine: (functional-732631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:f0:ea", ip: ""} in network mk-functional-732631: {Iface:virbr1 ExpiryTime:2025-10-20 13:06:22 +0000 UTC Type:0 Mac:52:54:00:fa:f0:ea Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-732631 Clientid:01:52:54:00:fa:f0:ea}
I1020 12:09:19.701789  152263 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined IP address 192.168.39.52 and MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:19.701954  152263 main.go:141] libmachine: (functional-732631) Calling .GetSSHPort
I1020 12:09:19.702110  152263 main.go:141] libmachine: (functional-732631) Calling .GetSSHKeyPath
I1020 12:09:19.702246  152263 main.go:141] libmachine: (functional-732631) Calling .GetSSHUsername
I1020 12:09:19.702429  152263 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/functional-732631/id_rsa Username:docker}
I1020 12:09:19.792388  152263 ssh_runner.go:195] Run: sudo crictl images --output json
I1020 12:09:19.900476  152263 main.go:141] libmachine: Making call to close driver server
I1020 12:09:19.900497  152263 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:19.900835  152263 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:19.900857  152263 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:19.900867  152263 main.go:141] libmachine: Making call to close driver server
I1020 12:09:19.900876  152263 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:19.900879  152263 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:19.901160  152263 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:19.901183  152263 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:19.901184  152263 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh pgrep buildkitd: exit status 1 (243.843446ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image build -t localhost/my-image:functional-732631 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 image build -t localhost/my-image:functional-732631 testdata/build --alsologtostderr: (4.945502964s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732631 image build -t localhost/my-image:functional-732631 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fe6471e9379
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-732631
--> 83121187e06
Successfully tagged localhost/my-image:functional-732631
83121187e06448d9ab177231dfb7dee5aaa37ab892813c4bc9e5bd006de56b68
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732631 image build -t localhost/my-image:functional-732631 testdata/build --alsologtostderr:
I1020 12:09:20.214817  152374 out.go:360] Setting OutFile to fd 1 ...
I1020 12:09:20.214966  152374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.214976  152374 out.go:374] Setting ErrFile to fd 2...
I1020 12:09:20.214981  152374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:09:20.215159  152374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
I1020 12:09:20.215803  152374 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.216520  152374 config.go:182] Loaded profile config "functional-732631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:09:20.216926  152374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.216970  152374 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.232965  152374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
I1020 12:09:20.233828  152374 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.234553  152374 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.234574  152374 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.235086  152374 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.235284  152374 main.go:141] libmachine: (functional-732631) Calling .GetState
I1020 12:09:20.237701  152374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1020 12:09:20.237746  152374 main.go:141] libmachine: Launching plugin server for driver kvm2
I1020 12:09:20.253950  152374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
I1020 12:09:20.254591  152374 main.go:141] libmachine: () Calling .GetVersion
I1020 12:09:20.255228  152374 main.go:141] libmachine: Using API Version  1
I1020 12:09:20.255260  152374 main.go:141] libmachine: () Calling .SetConfigRaw
I1020 12:09:20.255672  152374 main.go:141] libmachine: () Calling .GetMachineName
I1020 12:09:20.255871  152374 main.go:141] libmachine: (functional-732631) Calling .DriverName
I1020 12:09:20.256085  152374 ssh_runner.go:195] Run: systemctl --version
I1020 12:09:20.256119  152374 main.go:141] libmachine: (functional-732631) Calling .GetSSHHostname
I1020 12:09:20.260294  152374 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.260773  152374 main.go:141] libmachine: (functional-732631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:f0:ea", ip: ""} in network mk-functional-732631: {Iface:virbr1 ExpiryTime:2025-10-20 13:06:22 +0000 UTC Type:0 Mac:52:54:00:fa:f0:ea Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:functional-732631 Clientid:01:52:54:00:fa:f0:ea}
I1020 12:09:20.260807  152374 main.go:141] libmachine: (functional-732631) DBG | domain functional-732631 has defined IP address 192.168.39.52 and MAC address 52:54:00:fa:f0:ea in network mk-functional-732631
I1020 12:09:20.261068  152374 main.go:141] libmachine: (functional-732631) Calling .GetSSHPort
I1020 12:09:20.261302  152374 main.go:141] libmachine: (functional-732631) Calling .GetSSHKeyPath
I1020 12:09:20.261496  152374 main.go:141] libmachine: (functional-732631) Calling .GetSSHUsername
I1020 12:09:20.261653  152374 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/functional-732631/id_rsa Username:docker}
I1020 12:09:20.367209  152374 build_images.go:161] Building image from path: /tmp/build.2722078968.tar
I1020 12:09:20.367297  152374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1020 12:09:20.395502  152374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2722078968.tar
I1020 12:09:20.406385  152374 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2722078968.tar: stat -c "%s %y" /var/lib/minikube/build/build.2722078968.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2722078968.tar': No such file or directory
I1020 12:09:20.406462  152374 ssh_runner.go:362] scp /tmp/build.2722078968.tar --> /var/lib/minikube/build/build.2722078968.tar (3072 bytes)
I1020 12:09:20.483042  152374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2722078968
I1020 12:09:20.526263  152374 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2722078968 -xf /var/lib/minikube/build/build.2722078968.tar
I1020 12:09:20.544347  152374 crio.go:315] Building image: /var/lib/minikube/build/build.2722078968
I1020 12:09:20.544452  152374 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-732631 /var/lib/minikube/build/build.2722078968 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1020 12:09:25.062335  152374 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-732631 /var/lib/minikube/build/build.2722078968 --cgroup-manager=cgroupfs: (4.517847777s)
I1020 12:09:25.062430  152374 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2722078968
I1020 12:09:25.078305  152374 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2722078968.tar
I1020 12:09:25.091900  152374 build_images.go:217] Built localhost/my-image:functional-732631 from /tmp/build.2722078968.tar
I1020 12:09:25.091955  152374 build_images.go:133] succeeded building to: functional-732631
I1020 12:09:25.091962  152374 build_images.go:134] failed building to: 
I1020 12:09:25.092052  152374 main.go:141] libmachine: Making call to close driver server
I1020 12:09:25.092078  152374 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:25.092428  152374 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
I1020 12:09:25.092456  152374 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:25.092473  152374 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:25.092489  152374 main.go:141] libmachine: Making call to close driver server
I1020 12:09:25.092498  152374 main.go:141] libmachine: (functional-732631) Calling .Close
I1020 12:09:25.092752  152374 main.go:141] libmachine: Successfully made call to close driver server
I1020 12:09:25.092772  152374 main.go:141] libmachine: Making call to close connection to plugin binary
I1020 12:09:25.092774  152374 main.go:141] libmachine: (functional-732631) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
2025/10/20 12:09:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.43s)
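
The stderr log above spells out the build pipeline: the local build context is packed into a tarball (build_images.go:161 "Building image from path: /tmp/build.2722078968.tar"), the tar is copied into the guest, unpacked under /var/lib/minikube/build, and handed to sudo podman build. A minimal sketch of the packaging step only, using a hypothetical tarDir helper for illustration rather than minikube's implementation:

// Sketch: archive a build-context directory into a tarball, the way the
// log's "Building image from path: /tmp/build.*.tar" step implies.
package main

import (
	"archive/tar"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

func tarDir(dir, dest string) error {
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(dir, path) // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build.tar"); err != nil {
		log.Fatal(err)
	}
}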

TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.926702496s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-732631
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image load --daemon kicbase/echo-server:functional-732631 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 image load --daemon kicbase/echo-server:functional-732631 --alsologtostderr: (1.283100424s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image load --daemon kicbase/echo-server:functional-732631 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-732631
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image load --daemon kicbase/echo-server:functional-732631 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.04s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image save kicbase/echo-server:functional-732631 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 image save kicbase/echo-server:functional-732631 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.432981701s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image rm kicbase/echo-server:functional-732631 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-732631
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 image save --daemon kicbase/echo-server:functional-732631 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-732631
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (15.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-732631 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-732631 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lvfcd" [c7bc7b35-454c-4a9a-b686-5aafb476f695] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lvfcd" [c7bc7b35-454c-4a9a-b686-5aafb476f695] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.004349106s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.18s)
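
The wait above ("waiting 10m0s for pods matching app=hello-node") is a poll loop: list the pods behind the selector and succeed once every one reports Running (the suite's helper also tracks readiness conditions, as the Pending / ContainersNotReady transitions show). A minimal sketch of the same loop that shells out to kubectl instead of using a Kubernetes client, for illustration only:

// Sketch: poll until all pods matching a label selector report Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podsRunning(kubeContext, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, _ := podsRunning("functional-732631", "app=hello-node"); ok {
			fmt.Println("app=hello-node healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}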

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "309.024859ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "56.177385ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/MountCmd/any-port (9.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdany-port1803744765/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760962146947265393" to /tmp/TestFunctionalparallelMountCmdany-port1803744765/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760962146947265393" to /tmp/TestFunctionalparallelMountCmdany-port1803744765/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760962146947265393" to /tmp/TestFunctionalparallelMountCmdany-port1803744765/001/test-1760962146947265393
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.844808ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1020 12:09:07.162612  143131 retry.go:31] will retry after 565.360326ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 20 12:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 20 12:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 20 12:09 test-1760962146947265393
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh cat /mount-9p/test-1760962146947265393
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-732631 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [80ca4b3b-ea15-4ace-ac02-c98fc0f50c6c] Pending
helpers_test.go:352: "busybox-mount" [80ca4b3b-ea15-4ace-ac02-c98fc0f50c6c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [80ca4b3b-ea15-4ace-ac02-c98fc0f50c6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [80ca4b3b-ea15-4ace-ac02-c98fc0f50c6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006374059s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-732631 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdany-port1803744765/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.74s)
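
The "will retry after 565.360326ms" line above is the suite's retry helper at work: the first findmnt probe races the background mount daemon, so the check is retried with a randomized delay until the 9p mount appears. A minimal sketch of that poll-with-jittered-backoff pattern (illustrative; retry.go's actual implementation may differ):

// Sketch: retry a flaky check with randomized backoff, as the
// "retry.go:31] will retry after ..." log lines suggest.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retry(attempts int, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Jittered delay below one second, like the randomized waits in the log.
		d := time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, func() error {
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-732631",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
}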

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "301.473031ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.705138ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/ServiceCmd/List (1.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 service list: (1.280161572s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-732631 service list -o json: (1.325463082s)
functional_test.go:1504: Took "1.325568019s" to run "out/minikube-linux-amd64 -p functional-732631 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdspecific-port1487186527/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.341986ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1020 12:09:16.907380  143131 retry.go:31] will retry after 686.886013ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdspecific-port1487186527/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "sudo umount -f /mount-9p": exit status 1 (256.87323ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-732631 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdspecific-port1487186527/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
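
The `will retry after ...` line above is the harness's retry helper papering over the short window before the 9p mount becomes visible in the guest. A hedged sketch of the same pattern, with an illustrative doubling backoff rather than the harness's actual schedule:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    // The test verifies the mount with: ssh "findmnt -T /mount-9p | grep 9p".
    backoff := 500 * time.Millisecond
    for attempt := 1; attempt <= 5; attempt++ {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-732631",
            "ssh", "findmnt -T /mount-9p | grep 9p")
        if out, err := cmd.CombinedOutput(); err == nil {
            fmt.Printf("mounted:\n%s", out)
            return
        }
        fmt.Printf("attempt %d failed, retrying after %v\n", attempt, backoff)
        time.Sleep(backoff)
        backoff *= 2 // illustrative; the log shows sub-second jittered delays
    }
    fmt.Println("mount never became visible")
}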

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.52:31562
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.52:31562
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T" /mount1: exit status 1 (288.027665ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1020 12:09:18.980342  143131 retry.go:31] will retry after 513.984318ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732631 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-732631 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732631 /tmp/TestFunctionalparallelMountCmdVerifyCleanup305765523/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-732631
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-732631
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-732631
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (199.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:10:47.919932  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:11:15.629947  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m19.174556263s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.88s)

TestMultiControlPlane/serial/DeployApp (7.36s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 kubectl -- rollout status deployment/busybox: (5.226490157s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-268s8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-6b2km -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-gf7xh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-268s8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-6b2km -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-gf7xh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-268s8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-6b2km -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-gf7xh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.36s)
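
A compact sketch of the DNS fan-out this test performs, with the pod names hard-coded from the log above (the real test discovers them via the jsonpath query shown):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    pods := []string{ // names as listed in the log above
        "busybox-7b57f96db7-268s8",
        "busybox-7b57f96db7-6b2km",
        "busybox-7b57f96db7-gf7xh",
    }
    names := []string{"kubernetes.io", "kubernetes.default",
        "kubernetes.default.svc.cluster.local"}
    // Every pod must resolve every name; one failure fails the whole check.
    for _, pod := range pods {
        for _, name := range names {
            cmd := exec.Command("kubectl", "--context", "ha-456667",
                "exec", pod, "--", "nslookup", name)
            if err := cmd.Run(); err != nil {
                panic(fmt.Sprintf("%s cannot resolve %s: %v", pod, name, err))
            }
        }
    }
    fmt.Println("DNS OK from all pods")
}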

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-268s8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-268s8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-6b2km -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-6b2km -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-gf7xh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 kubectl -- exec busybox-7b57f96db7-gf7xh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
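
The shell pipeline above leans on busybox's nslookup layout: awk 'NR==5' grabs the answer line and cut -d' ' -f3 the resolved address. A sketch of the same resolve-then-ping check; the podExec helper is an invention of this sketch, not a harness function.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// podExec runs a shell snippet inside the given pod via kubectl exec.
func podExec(pod, script string) (string, error) {
    out, err := exec.Command("kubectl", "--context", "ha-456667",
        "exec", pod, "--", "sh", "-c", script).CombinedOutput()
    return strings.TrimSpace(string(out)), err
}

func main() {
    pod := "busybox-7b57f96db7-268s8" // one of the pods listed above
    // NR==5 assumes busybox nslookup prints the answer on line 5.
    ip, err := podExec(pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    if err != nil {
        panic(err)
    }
    if _, err := podExec(pod, "ping -c 1 "+ip); err != nil {
        panic(err)
    }
    fmt.Println("host", ip, "reachable from", pod)
}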

TestMultiControlPlane/serial/AddWorkerNode (44.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 node add --alsologtostderr -v 5: (43.370353197s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
E1020 12:13:43.771896  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.778392  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.790020  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.811664  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.853067  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:43.934986  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.26s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-456667 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1020 12:13:44.096678  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:13:44.418203  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (13.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --output json --alsologtostderr -v 5
E1020 12:13:45.059862  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp testdata/cp-test.txt ha-456667:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630833950/001/cp-test_ha-456667.txt
E1020 12:13:46.341797  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667:/home/docker/cp-test.txt ha-456667-m02:/home/docker/cp-test_ha-456667_ha-456667-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test_ha-456667_ha-456667-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667:/home/docker/cp-test.txt ha-456667-m03:/home/docker/cp-test_ha-456667_ha-456667-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test_ha-456667_ha-456667-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667:/home/docker/cp-test.txt ha-456667-m04:/home/docker/cp-test_ha-456667_ha-456667-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test_ha-456667_ha-456667-m04.txt"
E1020 12:13:48.904009  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp testdata/cp-test.txt ha-456667-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630833950/001/cp-test_ha-456667-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m02:/home/docker/cp-test.txt ha-456667:/home/docker/cp-test_ha-456667-m02_ha-456667.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test_ha-456667-m02_ha-456667.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m02:/home/docker/cp-test.txt ha-456667-m03:/home/docker/cp-test_ha-456667-m02_ha-456667-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test_ha-456667-m02_ha-456667-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m02:/home/docker/cp-test.txt ha-456667-m04:/home/docker/cp-test_ha-456667-m02_ha-456667-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test_ha-456667-m02_ha-456667-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp testdata/cp-test.txt ha-456667-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630833950/001/cp-test_ha-456667-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m03:/home/docker/cp-test.txt ha-456667:/home/docker/cp-test_ha-456667-m03_ha-456667.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test_ha-456667-m03_ha-456667.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m03:/home/docker/cp-test.txt ha-456667-m02:/home/docker/cp-test_ha-456667-m03_ha-456667-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test.txt"
E1020 12:13:54.026214  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test_ha-456667-m03_ha-456667-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m03:/home/docker/cp-test.txt ha-456667-m04:/home/docker/cp-test_ha-456667-m03_ha-456667-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test_ha-456667-m03_ha-456667-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp testdata/cp-test.txt ha-456667-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1630833950/001/cp-test_ha-456667-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m04:/home/docker/cp-test.txt ha-456667:/home/docker/cp-test_ha-456667-m04_ha-456667.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667 "sudo cat /home/docker/cp-test_ha-456667-m04_ha-456667.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m04:/home/docker/cp-test.txt ha-456667-m02:/home/docker/cp-test_ha-456667-m04_ha-456667-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m02 "sudo cat /home/docker/cp-test_ha-456667-m04_ha-456667-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 cp ha-456667-m04:/home/docker/cp-test.txt ha-456667-m03:/home/docker/cp-test_ha-456667-m04_ha-456667-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 ssh -n ha-456667-m03 "sudo cat /home/docker/cp-test_ha-456667-m04_ha-456667-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.27s)
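
Each cp/ssh pair above is a copy-and-verify round trip: minikube cp pushes the file to a node, then ssh cat reads it back for comparison. A minimal sketch of one such round trip against the m02 node, with error handling reduced to panics for brevity:

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    want, err := os.ReadFile("testdata/cp-test.txt")
    if err != nil {
        panic(err)
    }
    run := func(args ...string) []byte {
        out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
        if err != nil {
            panic(err)
        }
        return out
    }
    // Push the file to the node, then read it back over ssh.
    run("-p", "ha-456667", "cp", "testdata/cp-test.txt",
        "ha-456667-m02:/home/docker/cp-test.txt")
    got := run("-p", "ha-456667", "ssh", "-n", "ha-456667-m02",
        "sudo cat /home/docker/cp-test.txt")
    if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
        panic("round-tripped file does not match")
    }
    fmt.Println("cp round trip ok")
}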

TestMultiControlPlane/serial/StopSecondaryNode (76.07s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node stop m02 --alsologtostderr -v 5
E1020 12:14:04.268265  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:14:24.750230  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:15:05.712584  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 node stop m02 --alsologtostderr -v 5: (1m15.398840361s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5: exit status 7 (673.996686ms)
-- stdout --
	ha-456667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-456667-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-456667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-456667-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1020 12:15:13.661203  156948 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:15:13.661477  156948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:15:13.661486  156948 out.go:374] Setting ErrFile to fd 2...
	I1020 12:15:13.661490  156948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:15:13.661682  156948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:15:13.661850  156948 out.go:368] Setting JSON to false
	I1020 12:15:13.661882  156948 mustload.go:65] Loading cluster: ha-456667
	I1020 12:15:13.661992  156948 notify.go:220] Checking for updates...
	I1020 12:15:13.662271  156948 config.go:182] Loaded profile config "ha-456667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:15:13.662287  156948 status.go:174] checking status of ha-456667 ...
	I1020 12:15:13.662725  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.662764  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.683624  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I1020 12:15:13.684389  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.685032  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.685054  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.685503  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.685757  156948 main.go:141] libmachine: (ha-456667) Calling .GetState
	I1020 12:15:13.687730  156948 status.go:371] ha-456667 host status = "Running" (err=<nil>)
	I1020 12:15:13.687767  156948 host.go:66] Checking if "ha-456667" exists ...
	I1020 12:15:13.688071  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.688126  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.702659  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1020 12:15:13.703191  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.703751  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.703774  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.704129  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.704370  156948 main.go:141] libmachine: (ha-456667) Calling .GetIP
	I1020 12:15:13.707906  156948 main.go:141] libmachine: (ha-456667) DBG | domain ha-456667 has defined MAC address 52:54:00:7f:2a:bb in network mk-ha-456667
	I1020 12:15:13.708491  156948 main.go:141] libmachine: (ha-456667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2a:bb", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:09:46 +0000 UTC Type:0 Mac:52:54:00:7f:2a:bb Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-456667 Clientid:01:52:54:00:7f:2a:bb}
	I1020 12:15:13.708516  156948 main.go:141] libmachine: (ha-456667) DBG | domain ha-456667 has defined IP address 192.168.39.176 and MAC address 52:54:00:7f:2a:bb in network mk-ha-456667
	I1020 12:15:13.708769  156948 host.go:66] Checking if "ha-456667" exists ...
	I1020 12:15:13.709196  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.709259  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.724273  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I1020 12:15:13.724911  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.725474  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.725497  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.725833  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.726082  156948 main.go:141] libmachine: (ha-456667) Calling .DriverName
	I1020 12:15:13.726295  156948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:15:13.726322  156948 main.go:141] libmachine: (ha-456667) Calling .GetSSHHostname
	I1020 12:15:13.729392  156948 main.go:141] libmachine: (ha-456667) DBG | domain ha-456667 has defined MAC address 52:54:00:7f:2a:bb in network mk-ha-456667
	I1020 12:15:13.729930  156948 main.go:141] libmachine: (ha-456667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:2a:bb", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:09:46 +0000 UTC Type:0 Mac:52:54:00:7f:2a:bb Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-456667 Clientid:01:52:54:00:7f:2a:bb}
	I1020 12:15:13.729961  156948 main.go:141] libmachine: (ha-456667) DBG | domain ha-456667 has defined IP address 192.168.39.176 and MAC address 52:54:00:7f:2a:bb in network mk-ha-456667
	I1020 12:15:13.730153  156948 main.go:141] libmachine: (ha-456667) Calling .GetSSHPort
	I1020 12:15:13.730390  156948 main.go:141] libmachine: (ha-456667) Calling .GetSSHKeyPath
	I1020 12:15:13.730596  156948 main.go:141] libmachine: (ha-456667) Calling .GetSSHUsername
	I1020 12:15:13.730753  156948 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/ha-456667/id_rsa Username:docker}
	I1020 12:15:13.817926  156948 ssh_runner.go:195] Run: systemctl --version
	I1020 12:15:13.824742  156948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:15:13.842277  156948 kubeconfig.go:125] found "ha-456667" server: "https://192.168.39.254:8443"
	I1020 12:15:13.842323  156948 api_server.go:166] Checking apiserver status ...
	I1020 12:15:13.842370  156948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:15:13.863899  156948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	W1020 12:15:13.880938  156948 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:15:13.881001  156948 ssh_runner.go:195] Run: ls
	I1020 12:15:13.887444  156948 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1020 12:15:13.893315  156948 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1020 12:15:13.893348  156948 status.go:463] ha-456667 apiserver status = Running (err=<nil>)
	I1020 12:15:13.893360  156948 status.go:176] ha-456667 status: &{Name:ha-456667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:15:13.893383  156948 status.go:174] checking status of ha-456667-m02 ...
	I1020 12:15:13.893830  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.893882  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.909228  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38913
	I1020 12:15:13.909731  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.910278  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.910317  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.910735  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.910933  156948 main.go:141] libmachine: (ha-456667-m02) Calling .GetState
	I1020 12:15:13.912940  156948 status.go:371] ha-456667-m02 host status = "Stopped" (err=<nil>)
	I1020 12:15:13.912956  156948 status.go:384] host is not running, skipping remaining checks
	I1020 12:15:13.912962  156948 status.go:176] ha-456667-m02 status: &{Name:ha-456667-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:15:13.912983  156948 status.go:174] checking status of ha-456667-m03 ...
	I1020 12:15:13.913287  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.913330  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.927522  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1020 12:15:13.928013  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.928610  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.928636  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.928983  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.929181  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetState
	I1020 12:15:13.931165  156948 status.go:371] ha-456667-m03 host status = "Running" (err=<nil>)
	I1020 12:15:13.931187  156948 host.go:66] Checking if "ha-456667-m03" exists ...
	I1020 12:15:13.931530  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.931574  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.945935  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38533
	I1020 12:15:13.946395  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.947038  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.947075  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.947568  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.947808  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetIP
	I1020 12:15:13.951681  156948 main.go:141] libmachine: (ha-456667-m03) DBG | domain ha-456667-m03 has defined MAC address 52:54:00:0a:1c:7f in network mk-ha-456667
	I1020 12:15:13.952253  156948 main.go:141] libmachine: (ha-456667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1c:7f", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:11:44 +0000 UTC Type:0 Mac:52:54:00:0a:1c:7f Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:ha-456667-m03 Clientid:01:52:54:00:0a:1c:7f}
	I1020 12:15:13.952292  156948 main.go:141] libmachine: (ha-456667-m03) DBG | domain ha-456667-m03 has defined IP address 192.168.39.81 and MAC address 52:54:00:0a:1c:7f in network mk-ha-456667
	I1020 12:15:13.952522  156948 host.go:66] Checking if "ha-456667-m03" exists ...
	I1020 12:15:13.952857  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:13.952917  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:13.968955  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I1020 12:15:13.969448  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:13.969882  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:13.969905  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:13.970298  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:13.970516  156948 main.go:141] libmachine: (ha-456667-m03) Calling .DriverName
	I1020 12:15:13.970738  156948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:15:13.970761  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetSSHHostname
	I1020 12:15:13.973969  156948 main.go:141] libmachine: (ha-456667-m03) DBG | domain ha-456667-m03 has defined MAC address 52:54:00:0a:1c:7f in network mk-ha-456667
	I1020 12:15:13.974497  156948 main.go:141] libmachine: (ha-456667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1c:7f", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:11:44 +0000 UTC Type:0 Mac:52:54:00:0a:1c:7f Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:ha-456667-m03 Clientid:01:52:54:00:0a:1c:7f}
	I1020 12:15:13.974527  156948 main.go:141] libmachine: (ha-456667-m03) DBG | domain ha-456667-m03 has defined IP address 192.168.39.81 and MAC address 52:54:00:0a:1c:7f in network mk-ha-456667
	I1020 12:15:13.974707  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetSSHPort
	I1020 12:15:13.974888  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetSSHKeyPath
	I1020 12:15:13.975054  156948 main.go:141] libmachine: (ha-456667-m03) Calling .GetSSHUsername
	I1020 12:15:13.975226  156948 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/ha-456667-m03/id_rsa Username:docker}
	I1020 12:15:14.061268  156948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:15:14.080788  156948 kubeconfig.go:125] found "ha-456667" server: "https://192.168.39.254:8443"
	I1020 12:15:14.080824  156948 api_server.go:166] Checking apiserver status ...
	I1020 12:15:14.080906  156948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:15:14.100341  156948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup
	W1020 12:15:14.115261  156948 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:15:14.115323  156948 ssh_runner.go:195] Run: ls
	I1020 12:15:14.120289  156948 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1020 12:15:14.125346  156948 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1020 12:15:14.125377  156948 status.go:463] ha-456667-m03 apiserver status = Running (err=<nil>)
	I1020 12:15:14.125389  156948 status.go:176] ha-456667-m03 status: &{Name:ha-456667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:15:14.125421  156948 status.go:174] checking status of ha-456667-m04 ...
	I1020 12:15:14.125859  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:14.125931  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:14.140823  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I1020 12:15:14.141331  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:14.141781  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:14.141802  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:14.142136  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:14.142349  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetState
	I1020 12:15:14.144439  156948 status.go:371] ha-456667-m04 host status = "Running" (err=<nil>)
	I1020 12:15:14.144457  156948 host.go:66] Checking if "ha-456667-m04" exists ...
	I1020 12:15:14.144769  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:14.144814  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:14.158851  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I1020 12:15:14.159251  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:14.159740  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:14.159767  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:14.160168  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:14.160415  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetIP
	I1020 12:15:14.163572  156948 main.go:141] libmachine: (ha-456667-m04) DBG | domain ha-456667-m04 has defined MAC address 52:54:00:b6:5f:72 in network mk-ha-456667
	I1020 12:15:14.164087  156948 main.go:141] libmachine: (ha-456667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5f:72", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:13:15 +0000 UTC Type:0 Mac:52:54:00:b6:5f:72 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-456667-m04 Clientid:01:52:54:00:b6:5f:72}
	I1020 12:15:14.164103  156948 main.go:141] libmachine: (ha-456667-m04) DBG | domain ha-456667-m04 has defined IP address 192.168.39.69 and MAC address 52:54:00:b6:5f:72 in network mk-ha-456667
	I1020 12:15:14.164265  156948 host.go:66] Checking if "ha-456667-m04" exists ...
	I1020 12:15:14.164585  156948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:15:14.164633  156948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:15:14.178914  156948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
	I1020 12:15:14.179595  156948 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:15:14.180165  156948 main.go:141] libmachine: Using API Version  1
	I1020 12:15:14.180187  156948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:15:14.180622  156948 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:15:14.180849  156948 main.go:141] libmachine: (ha-456667-m04) Calling .DriverName
	I1020 12:15:14.181034  156948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:15:14.181056  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetSSHHostname
	I1020 12:15:14.184136  156948 main.go:141] libmachine: (ha-456667-m04) DBG | domain ha-456667-m04 has defined MAC address 52:54:00:b6:5f:72 in network mk-ha-456667
	I1020 12:15:14.184745  156948 main.go:141] libmachine: (ha-456667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:5f:72", ip: ""} in network mk-ha-456667: {Iface:virbr1 ExpiryTime:2025-10-20 13:13:15 +0000 UTC Type:0 Mac:52:54:00:b6:5f:72 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-456667-m04 Clientid:01:52:54:00:b6:5f:72}
	I1020 12:15:14.184776  156948 main.go:141] libmachine: (ha-456667-m04) DBG | domain ha-456667-m04 has defined IP address 192.168.39.69 and MAC address 52:54:00:b6:5f:72 in network mk-ha-456667
	I1020 12:15:14.184988  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetSSHPort
	I1020 12:15:14.185164  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetSSHKeyPath
	I1020 12:15:14.185306  156948 main.go:141] libmachine: (ha-456667-m04) Calling .GetSSHUsername
	I1020 12:15:14.185451  156948 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/ha-456667-m04/id_rsa Username:docker}
	I1020 12:15:14.265346  156948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:15:14.282191  156948 status.go:176] ha-456667-m04 status: &{Name:ha-456667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (76.07s)
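
Note that status exits non-zero (exit status 7 here) while still printing a block per node, so callers have to read stdout rather than trust the exit code alone. A hedged sketch of scanning that output; treating any non-zero exit as merely "degraded" is an assumption of this sketch, not documented minikube behavior.

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-linux-amd64",
        "-p", "ha-456667", "status").Output()
    if ee, ok := err.(*exec.ExitError); ok {
        // Non-zero exit (status 7 in the log above) accompanies a stopped
        // node; stdout is still fully populated.
        fmt.Println("status exited with code", ee.ExitCode())
    } else if err != nil {
        panic(err)
    }
    var node string
    sc := bufio.NewScanner(bytes.NewReader(out))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        switch {
        case strings.HasPrefix(line, "ha-"): // node-name headers, as above
            node = line
        case line == "host: Stopped":
            fmt.Println("stopped node:", node)
        }
    }
}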

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 node start m02 --alsologtostderr -v 5: (31.931497156s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
E1020 12:15:47.920238  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5: (1.173385611s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.108979074s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (502.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 stop --alsologtostderr -v 5
E1020 12:16:27.634857  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:18:43.771605  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:19:11.477597  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 stop --alsologtostderr -v 5: (4m17.668503608s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 start --wait true --alsologtostderr -v 5
E1020 12:20:47.920303  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:22:10.992006  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:23:43.771459  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 start --wait true --alsologtostderr -v 5: (4m4.46756884s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (502.25s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.46s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 node delete m03 --alsologtostderr -v 5: (17.645109173s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.46s)
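
The go-template above walks every node's conditions and emits the status of each Ready condition. A sketch of the same check; the test wraps the template in literal single quotes, which this sketch drops since exec passes arguments without shell quoting.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // One "True"/"False" token per node, taken from its Ready condition.
    tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
    out, err := exec.Command("kubectl", "--context", "ha-456667",
        "get", "nodes", "-o", "go-template="+tmpl).Output()
    if err != nil {
        panic(err)
    }
    for _, status := range strings.Fields(string(out)) {
        if status != "True" {
            panic("node not Ready: " + status)
        }
    }
    fmt.Println("all remaining nodes report Ready")
}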

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (262.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 stop --alsologtostderr -v 5
E1020 12:25:47.923312  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:28:43.776526  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 stop --alsologtostderr -v 5: (4m22.119731457s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5: exit status 7 (106.56517ms)
-- stdout --
	ha-456667
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-456667-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-456667-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1020 12:28:52.812680  161715 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:28:52.812935  161715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:28:52.812944  161715 out.go:374] Setting ErrFile to fd 2...
	I1020 12:28:52.812948  161715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:28:52.813128  161715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:28:52.813320  161715 out.go:368] Setting JSON to false
	I1020 12:28:52.813353  161715 mustload.go:65] Loading cluster: ha-456667
	I1020 12:28:52.813466  161715 notify.go:220] Checking for updates...
	I1020 12:28:52.813771  161715 config.go:182] Loaded profile config "ha-456667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:28:52.813790  161715 status.go:174] checking status of ha-456667 ...
	I1020 12:28:52.814204  161715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:28:52.814253  161715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:28:52.828215  161715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I1020 12:28:52.828659  161715 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:28:52.829177  161715 main.go:141] libmachine: Using API Version  1
	I1020 12:28:52.829205  161715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:28:52.829659  161715 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:28:52.829916  161715 main.go:141] libmachine: (ha-456667) Calling .GetState
	I1020 12:28:52.831718  161715 status.go:371] ha-456667 host status = "Stopped" (err=<nil>)
	I1020 12:28:52.831735  161715 status.go:384] host is not running, skipping remaining checks
	I1020 12:28:52.831744  161715 status.go:176] ha-456667 status: &{Name:ha-456667 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:28:52.831780  161715 status.go:174] checking status of ha-456667-m02 ...
	I1020 12:28:52.832232  161715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:28:52.832287  161715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:28:52.846283  161715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1020 12:28:52.846730  161715 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:28:52.847185  161715 main.go:141] libmachine: Using API Version  1
	I1020 12:28:52.847206  161715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:28:52.847564  161715 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:28:52.847760  161715 main.go:141] libmachine: (ha-456667-m02) Calling .GetState
	I1020 12:28:52.849835  161715 status.go:371] ha-456667-m02 host status = "Stopped" (err=<nil>)
	I1020 12:28:52.849849  161715 status.go:384] host is not running, skipping remaining checks
	I1020 12:28:52.849855  161715 status.go:176] ha-456667-m02 status: &{Name:ha-456667-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:28:52.849886  161715 status.go:174] checking status of ha-456667-m04 ...
	I1020 12:28:52.850181  161715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:28:52.850221  161715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:28:52.863865  161715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I1020 12:28:52.864330  161715 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:28:52.864779  161715 main.go:141] libmachine: Using API Version  1
	I1020 12:28:52.864801  161715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:28:52.865154  161715 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:28:52.865348  161715 main.go:141] libmachine: (ha-456667-m04) Calling .GetState
	I1020 12:28:52.867274  161715 status.go:371] ha-456667-m04 host status = "Stopped" (err=<nil>)
	I1020 12:28:52.867287  161715 status.go:384] host is not running, skipping remaining checks
	I1020 12:28:52.867292  161715 status.go:176] ha-456667-m04 status: &{Name:ha-456667-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (262.23s)
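
Note: the &{Name:ha-456667 Host:Stopped Kubelet:Stopped ...} lines in the stderr above are Go %+v dumps of minikube's per-node status value (status.go:176). As a rough illustration only, the struct below mirrors the fields visible in the log, not minikube's actual source:

package main

import "fmt"

// Status is an illustrative mirror of the value logged by status.go:176;
// the real type lives in minikube's source tree.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{Name: "ha-456667", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: false}
	// %+v on a pointer produces exactly the &{Field:value ...} shape in the log.
	fmt.Printf("%+v\n", &s)
}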

TestMultiControlPlane/serial/RestartCluster (123.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:30:06.840685  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:30:47.920630  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m2.98475314s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (123.79s)
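
Note: the readiness check at ha_test.go:594 above feeds kubectl a go-template that walks each node's conditions and prints the status of every "Ready" condition. A minimal standalone sketch of the same template logic in Go; the sample node JSON is a hypothetical stand-in for `kubectl get nodes -o json` output:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// The exact template string from the test: for every node, print the
	// status of each condition whose type is "Ready", one per line.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical stand-in for kubectl's JSON node list.
	const nodesJSON = `{"items":[
		{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
		{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" per node
		panic(err)
	}
}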

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (69.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-456667 node add --control-plane --alsologtostderr -v 5: (1m8.473743089s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-456667 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (76.05s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-687990 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-687990 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.049063993s)
--- PASS: TestJSONOutput/start/Command (76.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-687990 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-687990 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-687990 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-687990 --output=json --user=testUser: (7.417725808s)
--- PASS: TestJSONOutput/stop/Command (7.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-561154 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-561154 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.588967ms)
-- stdout --
	{"specversion":"1.0","id":"6b624d7c-4e86-49ad-9b66-008e775e488c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-561154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd18503b-a937-462f-8444-740a8e6b8b67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21773"}}
	{"specversion":"1.0","id":"dc697b2a-880c-459f-8a0d-5c3c164800b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e077ddce-6e0f-4109-b4f5-b1c9cceb195f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig"}}
	{"specversion":"1.0","id":"baf02783-21b7-4e36-9f6d-fbcc3fd99eca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube"}}
	{"specversion":"1.0","id":"6c99592c-6510-4265-8727-aec0910361c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dd5d05b0-95fa-4e4b-b1eb-674b2a2d36ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"391fd2cf-2323-40a3-8399-aea85ef977a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-561154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-561154
--- PASS: TestErrorJSONOutput (0.20s)
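
Note: each line of the stdout above is a CloudEvents-style JSON object, which is what minikube emits under --output=json. A small sketch of how such a stream could be consumed line by line; the event struct below only mirrors the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields seen in the log: a CloudEvents envelope whose
// data payload is a flat string-to-string map.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // not a JSON event line
		}
		// io.k8s.sigs.minikube.step carries currentstep/totalsteps;
		// io.k8s.sigs.minikube.error carries exitcode and message.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}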

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (77.79s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-423699 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:33:43.775640  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-423699 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.219492412s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-426771 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-426771 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.739718668s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-423699
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-426771
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-426771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-426771
helpers_test.go:175: Cleaning up "first-423699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-423699
--- PASS: TestMinikubeProfile (77.79s)

TestMountStart/serial/StartWithMountFirst (22.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-674103 --memory=3072 --mount-string /tmp/TestMountStartserial3588094550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-674103 --memory=3072 --mount-string /tmp/TestMountStartserial3588094550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.045615182s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.05s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-674103 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-674103 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
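
Note: the verification step relies on findmnt --json, which reports the mount as a {"filesystems":[...]} document. A sketch of decoding that shape in Go; the sample values are illustrative, not taken from this run:

package main

import (
	"encoding/json"
	"fmt"
)

// findmntOut matches the shape of `findmnt --json <target>` output
// from util-linux.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Illustrative output; real values depend on the mount flags the test
	// passed (--mount-port 46464, --mount-msize 6543, and so on).
	raw := []byte(`{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime,msize=6543"}]}`)

	var out findmntOut
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s mounted at %s via %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}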

TestMountStart/serial/StartWithMountSecond (20.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-692394 --memory=3072 --mount-string /tmp/TestMountStartserial3588094550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-692394 --memory=3072 --mount-string /tmp/TestMountStartserial3588094550/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.592327649s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.59s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-674103 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-692394
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-692394: (1.218957788s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (19.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-692394
E1020 12:35:47.923962  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-692394: (18.536860079s)
--- PASS: TestMountStart/serial/RestartStopped (19.54s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-692394 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (99.22s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874962 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874962 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.809107702s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.22s)

TestMultiNode/serial/DeployApp2Nodes (5.73s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-874962 -- rollout status deployment/busybox: (4.243748396s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-8bh26 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-fxzbt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-8bh26 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-fxzbt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-8bh26 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-fxzbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.73s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-8bh26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-8bh26 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-fxzbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-874962 -- exec busybox-7b57f96db7-fxzbt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
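
Note: the shell pipeline above extracts the host IP that the in-cluster name host.minikube.internal resolves to, then pings it. The lookup half could equally be done with Go's resolver; this sketch only produces a meaningful answer when run inside the cluster, where minikube injects that name:

package main

import (
	"fmt"
	"net"
)

// Sketch of the lookup half of the test's pipeline
// (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3):
// resolve the in-cluster name and print its addresses.
func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, a := range addrs {
		fmt.Println(a) // in this run the test pinged 192.168.39.1
	}
}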

TestMultiNode/serial/AddNode (62.41s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874962 -v=5 --alsologtostderr
E1020 12:38:43.771243  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-874962 -v=5 --alsologtostderr: (1m1.846812078s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (62.41s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-874962 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp testdata/cp-test.txt multinode-874962:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3914427825/001/cp-test_multinode-874962.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test.txt"
E1020 12:38:50.994285  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962:/home/docker/cp-test.txt multinode-874962-m02:/home/docker/cp-test_multinode-874962_multinode-874962-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test_multinode-874962_multinode-874962-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962:/home/docker/cp-test.txt multinode-874962-m03:/home/docker/cp-test_multinode-874962_multinode-874962-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test_multinode-874962_multinode-874962-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp testdata/cp-test.txt multinode-874962-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3914427825/001/cp-test_multinode-874962-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m02:/home/docker/cp-test.txt multinode-874962:/home/docker/cp-test_multinode-874962-m02_multinode-874962.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test_multinode-874962-m02_multinode-874962.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m02:/home/docker/cp-test.txt multinode-874962-m03:/home/docker/cp-test_multinode-874962-m02_multinode-874962-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test_multinode-874962-m02_multinode-874962-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp testdata/cp-test.txt multinode-874962-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3914427825/001/cp-test_multinode-874962-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m03:/home/docker/cp-test.txt multinode-874962:/home/docker/cp-test_multinode-874962-m03_multinode-874962.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962 "sudo cat /home/docker/cp-test_multinode-874962-m03_multinode-874962.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 cp multinode-874962-m03:/home/docker/cp-test.txt multinode-874962-m02:/home/docker/cp-test_multinode-874962-m03_multinode-874962-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 ssh -n multinode-874962-m02 "sudo cat /home/docker/cp-test_multinode-874962-m03_multinode-874962-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)
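
Note: the CopyFile sequence above is a copy/read-back round-trip: cp a file into a node, then cat it back over ssh and compare. A condensed sketch of one such round-trip, assuming a minikube binary on PATH and the running multinode-874962 profile (the test itself drives out/minikube-linux-amd64 through its helpers):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Copy the file into the node, as helpers_test.go:573 does...
	if out, err := exec.Command("minikube", "-p", "multinode-874962",
		"cp", "testdata/cp-test.txt", "multinode-874962:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// ...then read it back over SSH and compare, as helpers_test.go:551 does.
	got, err := exec.Command("minikube", "-p", "multinode-874962",
		"ssh", "-n", "multinode-874962", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("round-trip mismatch")
	}
	fmt.Println("cp round-trip OK")
}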

TestMultiNode/serial/StopNode (2.59s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-874962 node stop m03: (1.720051095s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874962 status: exit status 7 (431.940351ms)
-- stdout --
	multinode-874962
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874962-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874962-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr: exit status 7 (436.183443ms)
-- stdout --
	multinode-874962
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-874962-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-874962-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1020 12:38:59.196608  169521 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:59.196734  169521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:59.196743  169521 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:59.196747  169521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:59.196936  169521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:38:59.197102  169521 out.go:368] Setting JSON to false
	I1020 12:38:59.197137  169521 mustload.go:65] Loading cluster: multinode-874962
	I1020 12:38:59.197189  169521 notify.go:220] Checking for updates...
	I1020 12:38:59.197496  169521 config.go:182] Loaded profile config "multinode-874962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:59.197512  169521 status.go:174] checking status of multinode-874962 ...
	I1020 12:38:59.197900  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.197936  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.216143  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I1020 12:38:59.216730  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.217391  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.217435  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.217807  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.217990  169521 main.go:141] libmachine: (multinode-874962) Calling .GetState
	I1020 12:38:59.219684  169521 status.go:371] multinode-874962 host status = "Running" (err=<nil>)
	I1020 12:38:59.219706  169521 host.go:66] Checking if "multinode-874962" exists ...
	I1020 12:38:59.220031  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.220075  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.232966  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1020 12:38:59.233326  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.233714  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.233745  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.234068  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.234256  169521 main.go:141] libmachine: (multinode-874962) Calling .GetIP
	I1020 12:38:59.237056  169521 main.go:141] libmachine: (multinode-874962) DBG | domain multinode-874962 has defined MAC address 52:54:00:41:6f:93 in network mk-multinode-874962
	I1020 12:38:59.237529  169521 main.go:141] libmachine: (multinode-874962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6f:93", ip: ""} in network mk-multinode-874962: {Iface:virbr1 ExpiryTime:2025-10-20 13:36:15 +0000 UTC Type:0 Mac:52:54:00:41:6f:93 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-874962 Clientid:01:52:54:00:41:6f:93}
	I1020 12:38:59.237561  169521 main.go:141] libmachine: (multinode-874962) DBG | domain multinode-874962 has defined IP address 192.168.39.169 and MAC address 52:54:00:41:6f:93 in network mk-multinode-874962
	I1020 12:38:59.237689  169521 host.go:66] Checking if "multinode-874962" exists ...
	I1020 12:38:59.237952  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.237986  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.250980  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I1020 12:38:59.251343  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.251779  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.251800  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.252123  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.252321  169521 main.go:141] libmachine: (multinode-874962) Calling .DriverName
	I1020 12:38:59.252514  169521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:59.252554  169521 main.go:141] libmachine: (multinode-874962) Calling .GetSSHHostname
	I1020 12:38:59.255937  169521 main.go:141] libmachine: (multinode-874962) DBG | domain multinode-874962 has defined MAC address 52:54:00:41:6f:93 in network mk-multinode-874962
	I1020 12:38:59.256439  169521 main.go:141] libmachine: (multinode-874962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6f:93", ip: ""} in network mk-multinode-874962: {Iface:virbr1 ExpiryTime:2025-10-20 13:36:15 +0000 UTC Type:0 Mac:52:54:00:41:6f:93 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-874962 Clientid:01:52:54:00:41:6f:93}
	I1020 12:38:59.256467  169521 main.go:141] libmachine: (multinode-874962) DBG | domain multinode-874962 has defined IP address 192.168.39.169 and MAC address 52:54:00:41:6f:93 in network mk-multinode-874962
	I1020 12:38:59.256638  169521 main.go:141] libmachine: (multinode-874962) Calling .GetSSHPort
	I1020 12:38:59.256802  169521 main.go:141] libmachine: (multinode-874962) Calling .GetSSHKeyPath
	I1020 12:38:59.256987  169521 main.go:141] libmachine: (multinode-874962) Calling .GetSSHUsername
	I1020 12:38:59.257097  169521 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/multinode-874962/id_rsa Username:docker}
	I1020 12:38:59.341437  169521 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:59.347264  169521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:59.366781  169521 kubeconfig.go:125] found "multinode-874962" server: "https://192.168.39.169:8443"
	I1020 12:38:59.366828  169521 api_server.go:166] Checking apiserver status ...
	I1020 12:38:59.366870  169521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:38:59.393151  169521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	W1020 12:38:59.404512  169521 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:38:59.404565  169521 ssh_runner.go:195] Run: ls
	I1020 12:38:59.409116  169521 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1020 12:38:59.414280  169521 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1020 12:38:59.414303  169521 status.go:463] multinode-874962 apiserver status = Running (err=<nil>)
	I1020 12:38:59.414317  169521 status.go:176] multinode-874962 status: &{Name:multinode-874962 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:38:59.414344  169521 status.go:174] checking status of multinode-874962-m02 ...
	I1020 12:38:59.414760  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.414809  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.428867  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I1020 12:38:59.429426  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.429955  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.429982  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.430333  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.430555  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetState
	I1020 12:38:59.432337  169521 status.go:371] multinode-874962-m02 host status = "Running" (err=<nil>)
	I1020 12:38:59.432351  169521 host.go:66] Checking if "multinode-874962-m02" exists ...
	I1020 12:38:59.432669  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.432711  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.445827  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I1020 12:38:59.446284  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.446763  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.446785  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.447103  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.447291  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetIP
	I1020 12:38:59.450050  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | domain multinode-874962-m02 has defined MAC address 52:54:00:a2:47:b4 in network mk-multinode-874962
	I1020 12:38:59.450572  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:47:b4", ip: ""} in network mk-multinode-874962: {Iface:virbr1 ExpiryTime:2025-10-20 13:37:10 +0000 UTC Type:0 Mac:52:54:00:a2:47:b4 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-874962-m02 Clientid:01:52:54:00:a2:47:b4}
	I1020 12:38:59.450594  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | domain multinode-874962-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:a2:47:b4 in network mk-multinode-874962
	I1020 12:38:59.450711  169521 host.go:66] Checking if "multinode-874962-m02" exists ...
	I1020 12:38:59.450988  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.451034  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.464616  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I1020 12:38:59.465175  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.465736  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.465762  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.466088  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.466313  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .DriverName
	I1020 12:38:59.466551  169521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:38:59.466584  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetSSHHostname
	I1020 12:38:59.469726  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | domain multinode-874962-m02 has defined MAC address 52:54:00:a2:47:b4 in network mk-multinode-874962
	I1020 12:38:59.470174  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:47:b4", ip: ""} in network mk-multinode-874962: {Iface:virbr1 ExpiryTime:2025-10-20 13:37:10 +0000 UTC Type:0 Mac:52:54:00:a2:47:b4 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-874962-m02 Clientid:01:52:54:00:a2:47:b4}
	I1020 12:38:59.470206  169521 main.go:141] libmachine: (multinode-874962-m02) DBG | domain multinode-874962-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:a2:47:b4 in network mk-multinode-874962
	I1020 12:38:59.470354  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetSSHPort
	I1020 12:38:59.470548  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetSSHKeyPath
	I1020 12:38:59.470718  169521 main.go:141] libmachine: (multinode-874962-m02) Calling .GetSSHUsername
	I1020 12:38:59.470868  169521 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21773-139101/.minikube/machines/multinode-874962-m02/id_rsa Username:docker}
	I1020 12:38:59.551925  169521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:38:59.567062  169521 status.go:176] multinode-874962-m02 status: &{Name:multinode-874962-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:38:59.567095  169521 status.go:174] checking status of multinode-874962-m03 ...
	I1020 12:38:59.567425  169521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:38:59.567467  169521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:38:59.581572  169521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46025
	I1020 12:38:59.582151  169521 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:38:59.582687  169521 main.go:141] libmachine: Using API Version  1
	I1020 12:38:59.582711  169521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:38:59.583154  169521 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:38:59.583368  169521 main.go:141] libmachine: (multinode-874962-m03) Calling .GetState
	I1020 12:38:59.584940  169521 status.go:371] multinode-874962-m03 host status = "Stopped" (err=<nil>)
	I1020 12:38:59.584954  169521 status.go:384] host is not running, skipping remaining checks
	I1020 12:38:59.584960  169521 status.go:176] multinode-874962-m03 status: &{Name:multinode-874962-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.59s)

TestMultiNode/serial/StartAfterStop (133.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 node start m03 -v=5 --alsologtostderr
E1020 12:40:47.920187  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-874962 node start m03 -v=5 --alsologtostderr: (2m13.180235916s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (133.84s)

TestMultiNode/serial/RestartKeepsNodes (312.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874962
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-874962
E1020 12:43:43.775710  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-874962: (2m55.420252129s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874962 --wait=true -v=5 --alsologtostderr
E1020 12:45:47.921632  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874962 --wait=true -v=5 --alsologtostderr: (2m17.155901359s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874962
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.69s)

TestMultiNode/serial/DeleteNode (2.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-874962 node delete m03: (2.304165671s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.87s)
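The kubectl get nodes -o go-template check above walks every node's status.conditions and prints the status of each "Ready" condition. Below is a minimal sketch of that same template evaluated with Go's text/template package; the two-node input is illustrative only, and the field names are capitalized here because the sketch ranges over Go structs rather than the JSON maps kubectl feeds to the template.

package main

import (
	"os"
	"text/template"
)

// Minimal stand-ins for the fields the template touches; kubectl
// evaluates the same template against the full NodeList JSON.
type condition struct{ Type, Status string }
type node struct {
	Status struct{ Conditions []condition }
}
type nodeList struct{ Items []node }

const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var list nodeList
	// Two healthy nodes, illustrative only.
	for i := 0; i < 2; i++ {
		var n node
		n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
		list.Items = append(list.Items, n)
	}
	// Prints " True" once per node, the shape the test asserts on.
	tmpl := template.Must(template.New("ready").Parse(readyTmpl))
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
}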

TestMultiNode/serial/StopMultiNode (170.19s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 stop
E1020 12:46:46.844332  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:48:43.776448  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-874962 stop: (2m50.012764086s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874962 status: exit status 7 (92.768039ms)
-- stdout --
	multinode-874962
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874962-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr: exit status 7 (87.379929ms)
-- stdout --
	multinode-874962
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-874962-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1020 12:49:19.132752  172688 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:49:19.133011  172688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:49:19.133020  172688 out.go:374] Setting ErrFile to fd 2...
	I1020 12:49:19.133023  172688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:49:19.133445  172688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:49:19.133626  172688 out.go:368] Setting JSON to false
	I1020 12:49:19.133655  172688 mustload.go:65] Loading cluster: multinode-874962
	I1020 12:49:19.133758  172688 notify.go:220] Checking for updates...
	I1020 12:49:19.134079  172688 config.go:182] Loaded profile config "multinode-874962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:49:19.134101  172688 status.go:174] checking status of multinode-874962 ...
	I1020 12:49:19.134728  172688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:49:19.134772  172688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:49:19.149969  172688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I1020 12:49:19.150396  172688 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:49:19.150968  172688 main.go:141] libmachine: Using API Version  1
	I1020 12:49:19.151025  172688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:49:19.151495  172688 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:49:19.151713  172688 main.go:141] libmachine: (multinode-874962) Calling .GetState
	I1020 12:49:19.153452  172688 status.go:371] multinode-874962 host status = "Stopped" (err=<nil>)
	I1020 12:49:19.153472  172688 status.go:384] host is not running, skipping remaining checks
	I1020 12:49:19.153480  172688 status.go:176] multinode-874962 status: &{Name:multinode-874962 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:49:19.153515  172688 status.go:174] checking status of multinode-874962-m02 ...
	I1020 12:49:19.153813  172688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1020 12:49:19.153858  172688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1020 12:49:19.167525  172688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I1020 12:49:19.167924  172688 main.go:141] libmachine: () Calling .GetVersion
	I1020 12:49:19.168537  172688 main.go:141] libmachine: Using API Version  1
	I1020 12:49:19.168572  172688 main.go:141] libmachine: () Calling .SetConfigRaw
	I1020 12:49:19.168888  172688 main.go:141] libmachine: () Calling .GetMachineName
	I1020 12:49:19.169064  172688 main.go:141] libmachine: (multinode-874962-m02) Calling .GetState
	I1020 12:49:19.170821  172688 status.go:371] multinode-874962-m02 host status = "Stopped" (err=<nil>)
	I1020 12:49:19.170839  172688 status.go:384] host is not running, skipping remaining checks
	I1020 12:49:19.170846  172688 status.go:176] multinode-874962-m02 status: &{Name:multinode-874962-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (170.19s)
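The status.go:176 lines in the stderr above print a Go struct literal for each node. Below is a hedged reconstruction of that shape, inferred only from the fields visible in this log; minikube's actual type may differ.

package main

import "fmt"

// Status mirrors the fields printed by the &{...} dumps above;
// inferred from the log output, not copied from minikube's source.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{
		Name: "multinode-874962-m02", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true,
	}
	// %+v on a pointer reproduces the &{Name:... Host:...} form seen in the log.
	fmt.Printf("%+v\n", &s)
}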

TestMultiNode/serial/RestartMultiNode (86.76s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874962 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874962 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.20669054s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-874962 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.76s)

TestMultiNode/serial/ValidateNameConflict (43.16s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-874962
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874962-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-874962-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (64.849682ms)
-- stdout --
	* [multinode-874962-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-874962-m02' is duplicated with machine name 'multinode-874962-m02' in profile 'multinode-874962'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-874962-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:50:47.921622  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-874962-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.902386912s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-874962
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-874962: exit status 80 (256.364659ms)
-- stdout --
	* Adding node m03 to cluster multinode-874962 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-874962-m03 already exists in multinode-874962-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-874962-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.16s)

TestScheduledStopUnix (111.08s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-208904 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-208904 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.29972227s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-208904 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-208904 -n scheduled-stop-208904
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-208904 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1020 12:54:49.388959  143131 retry.go:31] will retry after 51.761µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.390150  143131 retry.go:31] will retry after 187.925µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.391317  143131 retry.go:31] will retry after 186.996µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.392466  143131 retry.go:31] will retry after 335.094µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.393606  143131 retry.go:31] will retry after 496.032µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.394738  143131 retry.go:31] will retry after 535.438µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.395863  143131 retry.go:31] will retry after 878.266µs: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.396992  143131 retry.go:31] will retry after 1.814799ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.399184  143131 retry.go:31] will retry after 2.949841ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.402388  143131 retry.go:31] will retry after 4.920362ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.407603  143131 retry.go:31] will retry after 7.174159ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.415894  143131 retry.go:31] will retry after 8.741935ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.425146  143131 retry.go:31] will retry after 6.58953ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.432458  143131 retry.go:31] will retry after 27.620814ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
I1020 12:54:49.460721  143131 retry.go:31] will retry after 28.531288ms: open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/scheduled-stop-208904/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-208904 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-208904 -n scheduled-stop-208904
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-208904
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-208904 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1020 12:55:30.995677  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:55:47.923684  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-208904
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-208904: exit status 7 (78.753093ms)
-- stdout --
	scheduled-stop-208904
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-208904 -n scheduled-stop-208904
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-208904 -n scheduled-stop-208904: exit status 7 (79.79232ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-208904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-208904
--- PASS: TestScheduledStopUnix (111.08s)
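The retry.go:31 lines above poll the profile's pid file with waits that roughly double, from about 50µs up to tens of milliseconds. Below is a minimal sketch of that poll-with-growing-backoff pattern, assuming a simple doubling policy with jitter; the exact policy in minikube's retry package may differ, and the path used here is hypothetical.

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with a jittered,
// roughly doubling delay like the retry.go lines above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 50 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: open %s: no such file or directory\n", jittered, path)
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("timed out after %v waiting for %s", maxWait, path)
}

func main() {
	// Hypothetical path, standing in for the scheduled-stop pid file above.
	if err := waitForFile("/tmp/scheduled-stop-pid", 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}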

TestRunningBinaryUpgrade (112.87s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3164172518 start -p running-upgrade-066492 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3164172518 start -p running-upgrade-066492 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.201966164s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-066492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-066492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.17667575s)
helpers_test.go:175: Cleaning up "running-upgrade-066492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-066492
--- PASS: TestRunningBinaryUpgrade (112.87s)

TestKubernetesUpgrade (192.14s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.610869733s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-486976
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-486976: (1.840992399s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-486976 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-486976 status --format={{.Host}}: exit status 7 (93.520773ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.950518652s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-486976 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (85.415847ms)
-- stdout --
	* [kubernetes-upgrade-486976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-486976
	    minikube start -p kubernetes-upgrade-486976 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4869762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-486976 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-486976 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.536159519s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-486976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-486976
--- PASS: TestKubernetesUpgrade (192.14s)
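The downgrade step above fails fast with exit status 106 because minikube refuses to move an existing v1.34.1 cluster back to v1.28.0. Below is a sketch of that kind of version guard using only the standard library; it illustrates the check, it is not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.34.1" into numeric components; minimal, assumes well-formed input.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		nums[i], _ = strconv.Atoi(p)
	}
	return nums
}

// isDowngrade reports whether requested is an older version than existing.
func isDowngrade(existing, requested string) bool {
	e, r := parse(existing), parse(requested)
	for i := 0; i < len(e) && i < len(r); i++ {
		if r[i] != e[i] {
			return r[i] < e[i]
		}
	}
	return len(r) < len(e)
}

func main() {
	if isDowngrade("v1.34.1", "v1.28.0") {
		fmt.Println("Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0")
	}
}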

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (83.36898ms)
-- stdout --
	* [NoKubernetes-518209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (79.17s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-518209 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-518209 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.846654427s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-518209 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (79.17s)

TestNetworkPlugins/group/false (3.32s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-126965 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-126965 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (107.130046ms)
-- stdout --
	* [false-126965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1020 12:56:04.038070  176939 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:56:04.038374  176939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:56:04.038387  176939 out.go:374] Setting ErrFile to fd 2...
	I1020 12:56:04.038393  176939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:56:04.038671  176939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-139101/.minikube/bin
	I1020 12:56:04.039241  176939 out.go:368] Setting JSON to false
	I1020 12:56:04.040139  176939 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5899,"bootTime":1760959065,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1020 12:56:04.040256  176939 start.go:141] virtualization: kvm guest
	I1020 12:56:04.042100  176939 out.go:179] * [false-126965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1020 12:56:04.043236  176939 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:56:04.043240  176939 notify.go:220] Checking for updates...
	I1020 12:56:04.045162  176939 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:56:04.046231  176939 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-139101/kubeconfig
	I1020 12:56:04.047217  176939 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-139101/.minikube
	I1020 12:56:04.048125  176939 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1020 12:56:04.049051  176939 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:56:04.050712  176939 config.go:182] Loaded profile config "NoKubernetes-518209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:56:04.050863  176939 config.go:182] Loaded profile config "force-systemd-env-533981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:56:04.050984  176939 config.go:182] Loaded profile config "offline-crio-488144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:56:04.051095  176939 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:56:04.090279  176939 out.go:179] * Using the kvm2 driver based on user configuration
	I1020 12:56:04.091278  176939 start.go:305] selected driver: kvm2
	I1020 12:56:04.091295  176939 start.go:925] validating driver "kvm2" against <nil>
	I1020 12:56:04.091308  176939 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:56:04.093083  176939 out.go:203] 
	W1020 12:56:04.093946  176939 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1020 12:56:04.094831  176939 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-126965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-126965

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-126965

>>> host: /etc/nsswitch.conf:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/hosts:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/resolv.conf:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-126965

>>> host: crictl pods:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: crictl containers:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> k8s: describe netcat deployment:
error: context "false-126965" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-126965" does not exist

>>> k8s: netcat logs:
error: context "false-126965" does not exist

>>> k8s: describe coredns deployment:
error: context "false-126965" does not exist

>>> k8s: describe coredns pods:
error: context "false-126965" does not exist

>>> k8s: coredns logs:
error: context "false-126965" does not exist

>>> k8s: describe api server pod(s):
error: context "false-126965" does not exist

>>> k8s: api server logs:
error: context "false-126965" does not exist

>>> host: /etc/cni:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: ip a s:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: ip r s:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: iptables-save:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: iptables table nat:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> k8s: describe kube-proxy daemon set:
error: context "false-126965" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-126965" does not exist

>>> k8s: kube-proxy logs:
error: context "false-126965" does not exist

>>> host: kubelet daemon status:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: kubelet daemon config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> k8s: kubelet logs:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-126965

>>> host: docker daemon status:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: docker daemon config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/docker/daemon.json:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: docker system info:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: cri-docker daemon status:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: cri-docker daemon config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: cri-dockerd version:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: containerd daemon status:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: containerd daemon config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/containerd/config.toml:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: containerd config dump:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: crio daemon status:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: crio daemon config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: /etc/crio:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

>>> host: crio config:
* Profile "false-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-126965"

----------------------- debugLogs end: false-126965 [took: 3.060512107s] --------------------------------
helpers_test.go:175: Cleaning up "false-126965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-126965
--- PASS: TestNetworkPlugins/group/false (3.32s)

TestStoppedBinaryUpgrade/Setup (2.56s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.56s)

TestStoppedBinaryUpgrade/Upgrade (149.3s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1284763515 start -p stopped-upgrade-017504 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1284763515 start -p stopped-upgrade-017504 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.960390826s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1284763515 -p stopped-upgrade-017504 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1284763515 -p stopped-upgrade-017504 stop: (1.54979905s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-017504 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-017504 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.79154879s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.30s)

TestNoKubernetes/serial/StartWithStopK8s (29.97s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (28.785019964s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-518209 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-518209 status -o json: exit status 2 (274.200597ms)
-- stdout --
	{"Name":"NoKubernetes-518209","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-518209
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.97s)
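The status -o json call above returns one JSON object for the profile; this test only needs the host to be Running while the kubelet stays Stopped. Below is a small sketch decoding that exact payload with encoding/json, with the struct fields taken from the JSON printed in the stdout block above.

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus lists the fields visible in the JSON above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-518209","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s profileStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// With --no-kubernetes the VM runs but the kubelet stays stopped.
	fmt.Printf("host=%s kubelet=%s\n", s.Host, s.Kubelet)
}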

TestNoKubernetes/serial/Start (36.18s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-518209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.182954892s)
--- PASS: TestNoKubernetes/serial/Start (36.18s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-518209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-518209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.837041ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.19s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

TestNoKubernetes/serial/Stop (1.4s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-518209
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-518209: (1.397586679s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

TestNoKubernetes/serial/StartNoArgs (57.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-518209 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 12:58:43.771648  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-518209 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.096701537s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-017504
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-017504: (1.368040713s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

TestPause/serial/Start (64.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-651808 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-651808 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.268293803s)
--- PASS: TestPause/serial/Start (64.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-518209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-518209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.591877ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestNetworkPlugins/group/auto/Start (90.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.620033039s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.62s)

TestNetworkPlugins/group/kindnet/Start (97.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m37.590056168s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (97.59s)

TestNetworkPlugins/group/calico/Start (74.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.238082443s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-126965 "pgrep -a kubelet"
I1020 13:02:19.681303  143131 config.go:182] Loaded profile config "auto-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pr56k" [31e32125-8122-40dd-9346-287f294a55d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pr56k" [31e32125-8122-40dd-9346-287f294a55d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004123482s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
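The Localhost and HairPin steps reuse one probe: `nc -w 5 -i 5 -z <target> 8080`, a zero-I/O connect test with a 5-second timeout run inside the netcat pod (HairPin dials the pod's own service name, "netcat"). A rough stand-alone equivalent in Go, a sketch under those assumptions rather than what the dnsutils image actually runs:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	target := "localhost:8080" // the HairPin variant would dial "netcat:8080"
	conn, err := net.DialTimeout("tcp", target, 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect failed:", err)
		os.Exit(1) // mirrors nc's non-zero exit on failure
	}
	conn.Close() // -z semantics: connect, send nothing, close
	fmt.Println("port open:", target)
}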

TestNetworkPlugins/group/custom-flannel/Start (114.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m54.693947976s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.69s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5g2gj" [8b932df1-89f8-4b50-b4d2-dd0056a2c3c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00398906s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bff74" [363d6bdc-51ee-4596-a85f-d2acc2ae4058] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-bff74" [363d6bdc-51ee-4596-a85f-d2acc2ae4058] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005000859s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-126965 "pgrep -a kubelet"
I1020 13:02:54.807493  143131 config.go:182] Loaded profile config "kindnet-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z7cp2" [a5696a6e-3710-4071-9db7-e94d20ab243c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z7cp2" [a5696a6e-3710-4071-9db7-e94d20ab243c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005253717s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-126965 "pgrep -a kubelet"
I1020 13:02:58.298024  143131 config.go:182] Loaded profile config "calico-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sqd92" [e04d1913-0303-4da4-bca3-8679b4184d2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sqd92" [e04d1913-0303-4da4-bca3-8679b4184d2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00473012s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (85.01s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.011640626s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.01s)

TestNetworkPlugins/group/flannel/Start (99.22s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1020 13:03:43.771526  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.216476174s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.22s)

TestNetworkPlugins/group/bridge/Start (83.45s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-126965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.445425819s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.45s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-126965 "pgrep -a kubelet"
I1020 13:04:40.595065  143131 config.go:182] Loaded profile config "custom-flannel-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mkrmb" [06eb320d-c642-445c-abc9-09f2cd0e0b43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mkrmb" [06eb320d-c642-445c-abc9-09f2cd0e0b43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.077163506s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-126965 "pgrep -a kubelet"
I1020 13:04:50.090471  143131 config.go:182] Loaded profile config "enable-default-cni-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fcdzr" [95975aaf-9acc-432d-8924-91c6b7f15fe0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fcdzr" [95975aaf-9acc-432d-8924-91c6b7f15fe0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00504589s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.00s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-frbmc" [0952861e-f559-436a-a2d0-c62349f8a14f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004400293s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (96.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-514662 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-514662 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m36.155346614s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-126965 "pgrep -a kubelet"
I1020 13:05:13.088978  143131 config.go:182] Loaded profile config "flannel-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5ddz2" [a98d8fe4-389a-4485-853c-b0b0339676d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5ddz2" [a98d8fe4-389a-4485-853c-b0b0339676d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003944338s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

TestStartStop/group/no-preload/serial/FirstStart (113s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-696638 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-696638 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m52.995616023s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.00s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (89.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-899380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-899380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m29.145204579s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-126965 "pgrep -a kubelet"
I1020 13:05:47.533915  143131 config.go:182] Loaded profile config "bridge-126965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-126965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6p44x" [67a4d355-2eef-4b49-b990-7635594061d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1020 13:05:47.920509  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6p44x" [67a4d355-2eef-4b49-b990-7635594061d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003869351s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-126965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-126965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-826827 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-826827 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m28.801243975s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-514662 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [46f13b75-1467-42f5-af22-bc7922af44bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [46f13b75-1467-42f5-af22-bc7922af44bf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003793509s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-514662 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.38s)
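Each DeployApp step finishes by reading the open-file-descriptor limit inside the busybox pod (`ulimit -n`). A hypothetical wrapper for that check, shelling out to kubectl the same way the log lines above do; the context and pod names are simply the ones from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `ulimit -n` inside the busybox pod and print the result.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-514662",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Println("pod fd limit:", strings.TrimSpace(string(out)))
}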

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-514662 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-514662 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071576217s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-514662 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/old-k8s-version/serial/Stop (88.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-514662 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-514662 --alsologtostderr -v=3: (1m28.9111019s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.91s)

TestStartStop/group/no-preload/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-696638 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6883c4b3-dfdc-4ada-814c-09dcb17e395b] Pending
helpers_test.go:352: "busybox" [6883c4b3-dfdc-4ada-814c-09dcb17e395b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6883c4b3-dfdc-4ada-814c-09dcb17e395b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00351578s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-696638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-899380 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [66810691-5cf1-48b0-b583-d6d9a4c33485] Pending
helpers_test.go:352: "busybox" [66810691-5cf1-48b0-b583-d6d9a4c33485] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [66810691-5cf1-48b0-b583-d6d9a4c33485] Running
E1020 13:07:19.935329  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:19.941718  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:19.953091  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:19.974526  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:20.015987  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:20.097478  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:20.259315  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:20.581013  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:21.222660  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:22.504307  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00465402s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-899380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-696638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-696638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02891097s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-696638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-899380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-899380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006913967s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-899380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (82.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-696638 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-696638 --alsologtostderr -v=3: (1m22.983432884s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.98s)

TestStartStop/group/embed-certs/serial/Stop (83.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-899380 --alsologtostderr -v=3
E1020 13:07:25.065955  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:30.187592  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:40.429540  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-899380 --alsologtostderr -v=3: (1m23.889461145s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-826827 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f4960549-4745-4b96-a23d-0765292da653] Pending
helpers_test.go:352: "busybox" [f4960549-4745-4b96-a23d-0765292da653] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1020 13:07:48.581801  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.588205  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.599711  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.621089  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.662641  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.744109  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:48.905666  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:49.227525  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:49.869649  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f4960549-4745-4b96-a23d-0765292da653] Running
E1020 13:07:51.152015  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:51.969470  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:51.975856  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:51.987248  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:52.008691  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:52.050194  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:52.131661  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:52.293296  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:52.615055  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:53.257384  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:53.714342  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:54.539553  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003512135s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-826827 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-826827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-826827 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (82.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-826827 --alsologtostderr -v=3
E1020 13:07:57.101631  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:07:58.835737  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:00.911842  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:02.223224  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:09.078091  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:12.465353  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-826827 --alsologtostderr -v=3: (1m22.309407324s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-514662 -n old-k8s-version-514662
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-514662 -n old-k8s-version-514662: exit status 7 (118.82563ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-514662 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (45.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-514662 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1020 13:08:29.560082  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:32.947657  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:41.874149  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:08:43.770916  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/functional-732631/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-514662 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.940995386s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-514662 -n old-k8s-version-514662
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696638 -n no-preload-696638
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696638 -n no-preload-696638: exit status 7 (69.445574ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-696638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (59.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-696638 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-696638 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (59.408797114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696638 -n no-preload-696638
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-899380 -n embed-certs-899380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-899380 -n embed-certs-899380: exit status 7 (86.586214ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-899380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (60.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-899380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1020 13:09:10.521674  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/kindnet-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-899380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m0.369991698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-899380 -n embed-certs-899380
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-krxjf" [4b053e8f-7119-499a-8532-08e76cc25ffe] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1020 13:09:13.909319  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-krxjf" [4b053e8f-7119-499a-8532-08e76cc25ffe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005403993s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827: exit status 7 (79.726976ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-826827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-826827 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-826827 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.774486498s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
E1020 13:10:09.420365  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.12s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-krxjf" [4b053e8f-7119-499a-8532-08e76cc25ffe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004900135s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-514662 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-514662 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-514662 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-514662 --alsologtostderr -v=1: (1.168948495s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-514662 -n old-k8s-version-514662
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-514662 -n old-k8s-version-514662: exit status 2 (322.124622ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-514662 -n old-k8s-version-514662
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-514662 -n old-k8s-version-514662: exit status 2 (318.391857ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-514662 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-514662 --alsologtostderr -v=1: (1.033296786s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-514662 -n old-k8s-version-514662
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-514662 -n old-k8s-version-514662
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.62s)

TestStartStop/group/newest-cni/serial/FirstStart (55.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-564601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1020 13:09:40.853896  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:40.860292  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:40.871720  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:40.893184  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:40.935494  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:41.017141  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:41.179438  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:41.500786  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:42.143176  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:43.425331  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:45.987548  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-564601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (55.749327481s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.75s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kp6kb" [d15c95ef-f93f-478c-9364-6e445fdfcc22] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kp6kb" [d15c95ef-f93f-478c-9364-6e445fdfcc22] Running
E1020 13:10:01.327136  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:01.351657  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.003987836s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gwz58" [4c53ff41-1b1a-43a6-97be-4a60c27faf2a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1020 13:09:51.072523  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.078910  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.090304  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.109762  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.112114  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.153572  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.235902  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.397836  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:51.719655  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:52.361312  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:53.642690  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:09:56.205147  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gwz58" [4c53ff41-1b1a-43a6-97be-4a60c27faf2a] Running
E1020 13:10:06.850686  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:06.857167  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:06.868587  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:06.890011  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:06.931487  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:07.012775  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:07.174068  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:07.496038  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.004644808s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kp6kb" [d15c95ef-f93f-478c-9364-6e445fdfcc22] Running
E1020 13:10:03.796496  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/auto-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00497214s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-696638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-696638 image list --format=json
E1020 13:10:08.138131  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-696638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-696638 --alsologtostderr -v=1: (1.075639483s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696638 -n no-preload-696638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696638 -n no-preload-696638: exit status 2 (313.341566ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696638 -n no-preload-696638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696638 -n no-preload-696638: exit status 2 (316.551711ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-696638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-696638 --alsologtostderr -v=1: (1.420094644s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696638 -n no-preload-696638
E1020 13:10:11.568901  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696638 -n no-preload-696638
E1020 13:10:11.982574  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.78s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-58l4p" [4e524878-6004-42c5-9217-bf3018fd587d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-58l4p" [4e524878-6004-42c5-9217-bf3018fd587d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004200306s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gwz58" [4c53ff41-1b1a-43a6-97be-4a60c27faf2a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003710064s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-899380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-899380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (2.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-899380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-899380 -n embed-certs-899380
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-899380 -n embed-certs-899380: exit status 2 (281.407668ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-899380 -n embed-certs-899380
E1020 13:10:17.104680  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-899380 -n embed-certs-899380: exit status 2 (285.469222ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-899380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-899380 -n embed-certs-899380
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-899380 -n embed-certs-899380
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.86s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-58l4p" [4e524878-6004-42c5-9217-bf3018fd587d] Running
E1020 13:10:27.346635  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00359119s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-826827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-826827 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-826827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827: exit status 2 (261.476471ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827: exit status 2 (266.127245ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-826827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-826827 -n default-k8s-diff-port-826827
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-564601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/newest-cni/serial/Stop (10.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-564601 --alsologtostderr -v=3
E1020 13:10:35.831335  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/calico-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-564601 --alsologtostderr -v=3: (10.669198582s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.67s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564601 -n newest-cni-564601
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564601 -n newest-cni-564601: exit status 7 (78.307191ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-564601 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (32.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-564601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1020 13:10:47.751935  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.758435  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.769792  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.791176  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.828606  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.832935  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.914354  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:47.920829  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/addons-323619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:48.076418  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:48.398115  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:49.040334  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:50.321963  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:52.883666  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:10:58.005041  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:11:02.795825  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/custom-flannel-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:11:08.246568  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/bridge-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:11:13.012713  143131 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-139101/.minikube/profiles/enable-default-cni-126965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-564601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (32.263176671s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-564601 -n newest-cni-564601
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-564601 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
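A hedged Go sketch of the image audit above. The JSON shape (an array of objects carrying repoTags) and the "non-minikube" filter are assumptions for illustration, not minikube's documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image is an assumed shape for `image list --format=json` entries.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-564601",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			// Assumed heuristic: anything outside the usual k8s registries
			// counts as non-minikube, e.g. kindest/kindnetd above.
			if !strings.Contains(tag, "k8s.io") && !strings.Contains(tag, "gcr.io/k8s-minikube") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}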

TestStartStop/group/newest-cni/serial/Pause (2.54s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-564601 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564601 -n newest-cni-564601
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564601 -n newest-cni-564601: exit status 2 (244.20902ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564601 -n newest-cni-564601
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564601 -n newest-cni-564601: exit status 2 (246.573047ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-564601 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-564601 -n newest-cni-564601
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-564601 -n newest-cni-564601
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.54s)
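The "may be ok" notes above reflect that `minikube status` exits with code 2 when a component reports Paused or Stopped, so the test trusts the printed state rather than the exit code. A minimal Go sketch of the same round trip (the helper structure is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// state runs `minikube status` with a Go-template format and returns the
// printed component state. Exit status 2 is expected while paused/stopped,
// so the error is deliberately ignored.
func state(format, profile string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-564601"

	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	if state("{{.APIServer}}", profile) == "Paused" && state("{{.Kubelet}}", profile) == "Stopped" {
		fmt.Println("paused as expected")
	}

	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
	fmt.Println("apiserver after unpause:", state("{{.APIServer}}", profile))
}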
Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.2
267 TestNetworkPlugins/group/cilium 3.41
282 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)
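Both skips above gate on the same check: if a preloaded tarball for this Kubernetes version and container runtime is already in the local cache, separately caching images or downloading binaries is redundant. A hedged sketch of such a gate; the cache layout and file name here are assumptions, and minikube's real helper differs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether an assumed preload tarball is on disk.
func preloadExists(home, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	home, _ := os.UserHomeDir()
	if preloadExists(home, "v1.28.0", "cri-o") {
		fmt.Println("Preload exists, images won't be cached")
	} else {
		fmt.Println("no preload found; images would be cached individually")
	}
}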

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.29s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323619 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
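This skip, and the seven tunnel skips below, come from a privilege probe: the tunnel tests need to change host routes, so they first try running route through passwordless sudo and skip when a password would be required (the exit status 1 above). A hedged sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `sudo -n` fails instead of prompting when a password would be needed.
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		fmt.Println("password required to execute 'route', skipping testTunnel:", err)
		return
	}
	fmt.Println("passwordless 'route' available; tunnel tests can run")
}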

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.2s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-126965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-126965

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-126965

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/hosts:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/resolv.conf:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-126965

>>> host: crictl pods:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: crictl containers:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> k8s: describe netcat deployment:
error: context "kubenet-126965" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-126965" does not exist

>>> k8s: netcat logs:
error: context "kubenet-126965" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-126965" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-126965" does not exist

>>> k8s: coredns logs:
error: context "kubenet-126965" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-126965" does not exist

>>> k8s: api server logs:
error: context "kubenet-126965" does not exist

>>> host: /etc/cni:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: ip a s:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: ip r s:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: iptables-save:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: iptables table nat:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-126965" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-126965" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-126965" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: kubelet daemon config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> k8s: kubelet logs:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-126965

>>> host: docker daemon status:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: docker daemon config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: docker system info:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: cri-docker daemon status:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: cri-docker daemon config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: cri-dockerd version:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: containerd daemon status:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: containerd daemon config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: containerd config dump:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: crio daemon status:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: crio daemon config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: /etc/crio:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

>>> host: crio config:
* Profile "kubenet-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-126965"

----------------------- debugLogs end: kubenet-126965 [took: 3.044073276s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-126965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-126965
--- SKIP: TestNetworkPlugins/group/kubenet (3.20s)

TestNetworkPlugins/group/cilium (3.41s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-126965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-126965" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-126965

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: docker daemon config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: docker system info:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: cri-docker daemon status:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: cri-docker daemon config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: cri-dockerd version:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: containerd daemon status:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: containerd daemon config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: containerd config dump:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: crio daemon status:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: crio daemon config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: /etc/crio:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

>>> host: crio config:
* Profile "cilium-126965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-126965"

----------------------- debugLogs end: cilium-126965 [took: 3.255239253s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-126965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-126965
--- SKIP: TestNetworkPlugins/group/cilium (3.41s)
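All of the probes above fail identically because the cilium test was skipped before "minikube start" ever ran: there is no profile, no kubeconfig context, and no guest to SSH into, yet the debug collector still walks its full checklist. A hypothetical sketch of that collector pattern, with the probe list and labels invented for illustration (the real collection and the "delete -p" cleanup live in minikube's test helpers):

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs a section label with the command whose output is dumped under it.
type probe struct {
	label string
	args  []string
}

func main() {
	profile := "cilium-126965" // taken from the log above

	probes := []probe{
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		{"host: crio daemon status", []string{"minikube", "-p", profile, "ssh", "sudo systemctl status crio"}},
	}

	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		// CombinedOutput keeps stderr, so "profile not found" style
		// errors land in the dump instead of being lost.
		out, _ := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Println(string(out))
	}

	// Cleanup mirrors helpers_test.go:178 above: delete the never-started profile.
	_ = exec.Command("minikube", "delete", "-p", profile).Run()
}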

x
+
TestStartStop/group/disable-driver-mounts (0.20s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-720325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-720325
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
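The whole group is gated on the driver at start_stop_delete_test.go:101: on anything but virtualbox the test skips immediately, which is why this KVM run records a 0.20s SKIP with nothing to do but delete the pre-created profile. A minimal sketch of such a driver-gated skip, assuming a hypothetical driverName() helper in place of the harness's real driver flag:

package harness

import (
	"os"
	"testing"
)

// driverName is a hypothetical stand-in for the harness's driver lookup;
// here it just reads an environment variable and defaults to kvm2.
func driverName() string {
	if d := os.Getenv("MINIKUBE_DRIVER"); d != "" {
		return d
	}
	return "kvm2"
}

func TestDisableDriverMounts(t *testing.T) {
	// Mirrors the gate seen above: anything but virtualbox skips
	// immediately, producing the SKIP recorded in this report.
	if driverName() != "virtualbox" {
		t.Skipf("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox (driver: %s)", driverName())
	}
	// ... the real test would start a cluster with --disable-driver-mounts here.
}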
